[Doc] update documents links to main branch (open-mmlab#1733)
* [Doc] update documents links to main branch

* fix readme

* fix readme_zh

* fix docs

* fix migration

* fix migration

* modify to latest

* update changelog
Z-Fran authored Apr 6, 2023
1 parent 5f5a757 commit 6759190
Show file tree
Hide file tree
Showing 33 changed files with 100 additions and 106 deletions.
README.md (54 changes: 27 additions & 27 deletions)
@@ -19,17 +19,17 @@
<div>&nbsp;</div>

[![PyPI](https://badge.fury.io/py/mmedit.svg)](https://pypi.org/project/mmedit/)
-[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/en/1.x/)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/en/latest/)
[![badge](https://github.com/open-mmlab/mmediting/workflows/build/badge.svg)](https://github.com/open-mmlab/mmediting/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmediting/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmediting)
-[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/1.x/LICENSE)
+[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/main/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)

-[📘Documentation](https://mmediting.readthedocs.io/en/1.x/) |
-[🛠️Installation](https://mmediting.readthedocs.io/en/1.x/get_started/install.html) |
-[📊Model Zoo](https://mmediting.readthedocs.io/en/1.x/model_zoo/overview.html) |
-[🆕Update News](https://mmediting.readthedocs.io/en/1.x/changelog.html) |
+[📘Documentation](https://mmediting.readthedocs.io/en/latest/) |
+[🛠️Installation](https://mmediting.readthedocs.io/en/latest/get_started/install.html) |
+[📊Model Zoo](https://mmediting.readthedocs.io/en/latest/model_zoo/overview.html) |
+[🆕Update News](https://mmediting.readthedocs.io/en/latest/changelog.html) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmediting/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmediting/issues)

@@ -85,7 +85,7 @@ Currently, MMEditing supports multiple image and video generation/editing tasks.

https://user-images.githubusercontent.com/12782558/217152698-49169038-9872-4200-80f7-1d5f7613afd7.mp4

-The best practice on our main 1.x branch works with **Python 3.8+** and **PyTorch 1.9+**.
+The best practice on our main branch works with **Python 3.8+** and **PyTorch 1.9+**.

### ✨ Major features

@@ -99,7 +99,7 @@ The best practice on our main 1.x branch works with **Python 3.8+** and **PyTorc

- **New Modular Design for Flexible Combination**

-We decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. Specifically, a new design for complex loss modules is proposed for customizing the links between modules, which can achieve flexible combinations among different modules. (Tutorial for [losses](https://mmediting.readthedocs.io/en/dev-1.x/howto/losses.html))
+We decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. Specifically, a new design for complex loss modules is proposed for customizing the links between modules, which can achieve flexible combinations among different modules. (Tutorial for [losses](https://mmediting.readthedocs.io/en/latest/howto/losses.html))

- **Efficient Distributed Training**

@@ -142,7 +142,7 @@ mim install 'mmcv>=2.0.0'
Install MMEditing from source.

```shell
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```
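
As a quick sanity check after the editable install, a minimal sketch (assuming the package imports under the `mmedit` module name matching the PyPI package above):

```python
# Minimal post-install check: import the package and print its version.
# The module name `mmedit` is assumed from the PyPI package name shown above.
import mmedit

print(mmedit.__version__)
```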
@@ -321,7 +321,7 @@ Please see [quick run](docs/en/get_started/quick_run.md) and [inference](docs/en
</tbody>
</table>

-Please refer to [model_zoo](https://mmediting.readthedocs.io/en/1.x/model_zoo/overview.html) for more details.
+Please refer to [model_zoo](https://mmediting.readthedocs.io/en/latest/model_zoo/overview.html) for more details.

<p align="right"><a href="#top">🔝Back to top</a></p>

@@ -362,24 +362,24 @@ Please refer to [LICENSES](LICENSE) for the careful check, if you are using our
## 🏗️ ️OpenMMLab Family

- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
-- [MMCV](https://github.com/open-mmlab/mmcv/tree/2.x): OpenMMLab foundational library for computer vision.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
-- [MMClassification](https://github.com/open-mmlab/mmclassification/tree/1.x): OpenMMLab image classification toolbox and benchmark.
-- [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x): OpenMMLab detection toolbox and benchmark.
-- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d/tree/1.x): OpenMMLab's next-generation platform for general 3D object detection.
-- [MMRotate](https://github.com/open-mmlab/mmrotate/tree/1.x): OpenMMLab rotated object detection toolbox and benchmark.
-- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/tree/1.x): OpenMMLab semantic segmentation toolbox and benchmark.
-- [MMOCR](https://github.com/open-mmlab/mmocr/tree/1.x): OpenMMLab text detection, recognition, and understanding toolbox.
-- [MMPose](https://github.com/open-mmlab/mmpose/tree/1.x): OpenMMLab pose estimation toolbox and benchmark.
-- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d/tree/1.x): OpenMMLab 3D human parametric model toolbox and benchmark.
-- [MMSelfSup](https://github.com/open-mmlab/mmselfsup/tree/1.x): OpenMMLab self-supervised learning toolbox and benchmark.
-- [MMRazor](https://github.com/open-mmlab/mmrazor/tree/1.x): OpenMMLab model compression toolbox and benchmark.
-- [MMFewShot](https://github.com/open-mmlab/mmfewshot/tree/1.x): OpenMMLab fewshot learning toolbox and benchmark.
-- [MMAction2](https://github.com/open-mmlab/mmaction2/tree/1.x): OpenMMLab's next-generation action understanding toolbox and benchmark.
-- [MMTracking](https://github.com/open-mmlab/mmtracking/tree/1.x): OpenMMLab video perception toolbox and benchmark.
-- [MMFlow](https://github.com/open-mmlab/mmflow/tree/1.x): OpenMMLab optical flow toolbox and benchmark.
-- [MMEditing](https://github.com/open-mmlab/mmediting/tree/1.x): OpenMMLab image and video editing toolbox.
-- [MMGeneration](https://github.com/open-mmlab/mmgeneration/tree/1.x): OpenMMLab image and video generative models toolbox.
+- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.

<p align="right"><a href="#top">🔝Back to top</a></p>
README_zh-CN.md (16 changes: 8 additions & 8 deletions)
@@ -19,17 +19,17 @@
<div>&nbsp;</div>

[![PyPI](https://badge.fury.io/py/mmedit.svg)](https://pypi.org/project/mmedit/)
-[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/zh_CN/1.x/)
+[![docs](https://img.shields.io/badge/docs-latest-blue)](https://mmediting.readthedocs.io/zh_CN/latest/)
[![badge](https://github.com/open-mmlab/mmediting/workflows/build/badge.svg)](https://github.com/open-mmlab/mmediting/actions)
[![codecov](https://codecov.io/gh/open-mmlab/mmediting/branch/master/graph/badge.svg)](https://codecov.io/gh/open-mmlab/mmediting)
-[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/1.x/LICENSE)
+[![license](https://img.shields.io/github/license/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/blob/main/LICENSE)
[![open issues](https://isitmaintained.com/badge/open/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)
[![issue resolution](https://isitmaintained.com/badge/resolution/open-mmlab/mmediting.svg)](https://github.com/open-mmlab/mmediting/issues)

-[📘Documentation](https://mmediting.readthedocs.io/en/1.x/) |
-[🛠️Installation](https://mmediting.readthedocs.io/zh_CN/1.x/get_started/install.html) |
-[📊Model Zoo](https://mmediting.readthedocs.io/zh_CN/1.x/model_zoo/overview.html) |
-[🆕Update News](https://mmediting.readthedocs.io/zh_CN/1.x/changelog.html) |
+[📘Documentation](https://mmediting.readthedocs.io/zh_CN/latest/) |
+[🛠️Installation](https://mmediting.readthedocs.io/zh_CN/latest/get_started/install.html) |
+[📊Model Zoo](https://mmediting.readthedocs.io/zh_CN/latest/model_zoo/overview.html) |
+[🆕Update News](https://mmediting.readthedocs.io/zh_CN/latest/changelog.html) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmediting/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmediting/issues)

@@ -139,7 +139,7 @@ mim install 'mmcv>=2.0.0'
Install MMEditing from source.

```
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```
@@ -318,7 +318,7 @@ pip3 install -e .
</tbody>
</table>

-Please refer to the [model zoo](https://mmediting.readthedocs.io/zh_CN/1.x/model_zoo/overview.html) for more details.
+Please refer to the [model zoo](https://mmediting.readthedocs.io/zh_CN/latest/model_zoo/overview.html) for more details.

<p align="right"><a href="#top">🔝Back to top</a></p>

configs/disco_diffusion/README.md (2 changes: 1 addition & 1 deletion)
@@ -104,7 +104,7 @@ save_image(image, "image.png")

## Tutorials

-Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/dev-1.x/configs/disco_diffusion/tutorials.ipynb) tutorial that exhibits the meaning of different parameters, and gives results corresponding to adjustment.
+Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/main/configs/disco_diffusion/tutorials.ipynb) tutorial that exhibits the meaning of different parameters, and gives results corresponding to adjustment.
Refer to [Disco Sheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit).

## Credits
configs/disco_diffusion/README_zh-CN.md (2 changes: 1 addition & 1 deletion)
@@ -104,7 +104,7 @@ save_image(image, "image.png")

## Tutorials

-Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/dev-1.x/configs/disco_diffusion/tutorials.ipynb) tutorial that explains the meaning of the different parameters and shows the results of the corresponding adjustments.
+Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/main/configs/disco_diffusion/tutorials.ipynb) tutorial that explains the meaning of the different parameters and shows the results of the corresponding adjustments.
Refer to [Disco Sheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit).

## Credits
configs/disco_diffusion/tutorials.ipynb (2 changes: 1 addition & 1 deletion)
@@ -76,7 +76,7 @@
"# Install mmediting from source\n",
"%cd /content/\n",
"!rm -rf mmediting\n",
"!git clone -b dev-1.x https://github.com/open-mmlab/mmediting.git \n",
"!git clone https://github.com/open-mmlab/mmediting.git \n",
"%cd mmediting\n",
"!pip install -r requirements.txt\n",
"!pip install -e ."
configs/inst_colorization/README.md (2 changes: 1 addition & 1 deletion)
@@ -36,7 +36,7 @@ You can use the following commands to colorize an image.
python demo/colorization_demo.py configs/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256.py https://download.openmmlab.com/mmediting/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256-5b9d4eee.pth input.jpg output.jpg
```

-For more demos, you can refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/1.x/user_guides/3_inference.html).
+For more demos, you can refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html).

</details>

configs/inst_colorization/README_zh-CN.md (2 changes: 1 addition & 1 deletion)
@@ -35,7 +35,7 @@ Image colorization is inherently an ill-posed problem with multi-modal uncertain
python demo/colorization_demo.py configs/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256.py https://download.openmmlab.com/mmediting/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256-5b9d4eee.pth input.jpg output.jpg
```

-For more details, refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/1.x/user_guides/3_inference.html).
+For more details, refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html).

</details>

demo/README.md (2 changes: 1 addition & 1 deletion)
@@ -1,6 +1,6 @@
# MMEditing Demo

-There are some mmediting demos in this folder. We provide python command line usage here to run these demos, and more guidance can also be found in the [documentation](https://mmediting.readthedocs.io/en/dev-1.x/user_guides/3_inference.html).
+There are some mmediting demos in this folder. We provide python command line usage here to run these demos, and more guidance can also be found in the [documentation](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html).

Table of contents:

demo/mmediting_inference_tutorial.ipynb (2 changes: 1 addition & 1 deletion)
@@ -318,7 +318,7 @@
"\n",
"Next we describe how to perform inference with python code snippets.\n",
"\n",
"(We also provide command line interface for you to do inference by running mmediting_inference_demo.py. The usage of this interface could be found in [README.md](./README.md) and more guidance could be found in the [documentation](https://mmediting.readthedocs.io/en/dev-1.x/user_guides/3_inference.html#).)\n"
"(We also provide command line interface for you to do inference by running mmediting_inference_demo.py. The usage of this interface could be found in [README.md](./README.md) and more guidance could be found in the [documentation](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html#).)\n"
]
},
{
docker/Dockerfile (2 changes: 1 addition & 1 deletion)
@@ -16,7 +16,7 @@ RUN apt-get update && apt-get install -y git ninja-build libglib2.0-0 libsm6 lib

# Install mmediting
RUN conda clean --all
-RUN git clone -b 1.x https://github.com/open-mmlab/mmediting.git /mmediting
+RUN git clone https://github.com/open-mmlab/mmediting.git /mmediting
WORKDIR /mmediting
ENV FORCE_CUDA="1"
RUN pip install openmim
docs/en/changelog.md (2 changes: 1 addition & 1 deletion)
@@ -227,4 +227,4 @@ MMEditing 1.0.0rc0 is the first version of MMEditing 1.x, a part of the OpenMMLa

Built upon the new [training engine](https://github.com/open-mmlab/mmengine), MMEditing 1.x unifies the interfaces of dataset, models, evaluation, and visualization.

-And there are some BC-breaking changes. Please check [the migration tutorial](https://mmediting.readthedocs.io/en/1.x/migration/overview.html) for more details.
+And there are some BC-breaking changes. Please check [the migration tutorial](https://mmediting.readthedocs.io/en/latest/migration/overview.html) for more details.
docs/en/community/projects.md (4 changes: 2 additions & 2 deletions)
@@ -18,11 +18,11 @@ You can copy and create your own project from the [example project](../../../pro

We also provide some documentation listed below for your reference:

-- [Contribution Guide](https://mmediting.readthedocs.io/en/dev-1.x/community/contributing.html)
+- [Contribution Guide](https://mmediting.readthedocs.io/en/latest/community/contributing.html)

The guides for new contributors about how to add your projects to MMEditing.

-- [New Model Guide](https://mmediting.readthedocs.io/en/dev-1.x/howto/models.html)
+- [New Model Guide](https://mmediting.readthedocs.io/en/latest/howto/models.html)

The documentation of adding new models.

docs/en/get_started/install.md (6 changes: 3 additions & 3 deletions)
@@ -10,7 +10,7 @@ In this section, you will know about:

## Installation

-We recommend that users follow our [Best practices](#best-practices) to install MMEditing 1.x.
+We recommend that users follow our [Best practices](#best-practices) to install MMEditing.
However, the whole process is highly customizable. See [Customize installation](#customize-installation) section for more information.

### Prerequisites
@@ -69,11 +69,11 @@ mim install 'mmcv>=2.0.0'
pip install git+https://github.com/open-mmlab/mmengine.git
```

-**Step 2.** Install MMEditing 1.x.
+**Step 2.** Install MMEditing.
Install [MMEditing](https://github.com/open-mmlab/mmediting) from the source code.

```shell
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e . -v
```
docs/en/howto/dataset.md (2 changes: 1 addition & 1 deletion)
@@ -16,7 +16,7 @@ In this document, we will introduce the design of each dataset in MMEditing and

## Supported Data Format

-In the 1.x version of MMEditing, all datasets inherit from `BaseDataset`.
+In MMEditing, all datasets inherit from `BaseDataset`.
Each dataset loads its list of data info (e.g., data paths) via `load_data_list`.
In `__getitem__`, `prepare_data` is called to get the preprocessed data.
In `prepare_data`, the data loading pipeline consists of the following steps:
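
A minimal sketch of the dataset contract described above, assuming `BaseDataset` is imported from MMEngine; the file layout and dict keys below are purely illustrative, not the exact MMEditing API:

```python
# Hypothetical sketch only: `load_data_list` returns per-sample info dicts,
# and the inherited `__getitem__` calls `prepare_data`, which runs the
# loading pipeline on one of those dicts. Paths and keys are illustrative.
from mmengine.dataset import BaseDataset  # assumed import path


class ToyPairedImageDataset(BaseDataset):

    def load_data_list(self):
        # Collect the data info list, e.g. paths to low-quality/ground-truth pairs.
        return [
            dict(
                img_path=f'{self.data_root}/lq/{i:04d}.png',
                gt_path=f'{self.data_root}/gt/{i:04d}.png',
            )
            for i in range(100)
        ]
```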