diff --git a/README.md b/README.md
index e6ea57a799..da4ccae0f6 100644
--- a/README.md
+++ b/README.md
@@ -19,17 +19,17 @@
[](https://pypi.org/project/mmedit/)
-[](https://mmediting.readthedocs.io/en/1.x/)
+[](https://mmediting.readthedocs.io/en/latest/)
[](https://github.com/open-mmlab/mmediting/actions)
[](https://codecov.io/gh/open-mmlab/mmediting)
-[](https://github.com/open-mmlab/mmediting/blob/1.x/LICENSE)
+[](https://github.com/open-mmlab/mmediting/blob/main/LICENSE)
[](https://github.com/open-mmlab/mmediting/issues)
[](https://github.com/open-mmlab/mmediting/issues)
-[📘Documentation](https://mmediting.readthedocs.io/en/1.x/) |
-[🛠️Installation](https://mmediting.readthedocs.io/en/1.x/get_started/install.html) |
-[📊Model Zoo](https://mmediting.readthedocs.io/en/1.x/model_zoo/overview.html) |
-[🆕Update News](https://mmediting.readthedocs.io/en/1.x/changelog.html) |
+[📘Documentation](https://mmediting.readthedocs.io/en/latest/) |
+[🛠️Installation](https://mmediting.readthedocs.io/en/latest/get_started/install.html) |
+[📊Model Zoo](https://mmediting.readthedocs.io/en/latest/model_zoo/overview.html) |
+[🆕Update News](https://mmediting.readthedocs.io/en/latest/changelog.html) |
[🚀Ongoing Projects](https://github.com/open-mmlab/mmediting/projects) |
[🤔Reporting Issues](https://github.com/open-mmlab/mmediting/issues)
@@ -85,7 +85,7 @@ Currently, MMEditing support multiple image and video generation/editing tasks.
https://user-images.githubusercontent.com/12782558/217152698-49169038-9872-4200-80f7-1d5f7613afd7.mp4
-The best practice on our main 1.x branch works with **Python 3.8+** and **PyTorch 1.9+**.
+The best practice on our main branch works with **Python 3.8+** and **PyTorch 1.9+**.
### ✨ Major features
@@ -99,7 +99,7 @@ The best practice on our main 1.x branch works with **Python 3.8+** and **PyTorc
- **New Modular Design for Flexible Combination**
- We decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. Specifically, a new design for complex loss modules is proposed for customizing the links between modules, which can achieve flexible combinations among different modules.(Tutorial for [losses](https://mmediting.readthedocs.io/en/dev-1.x/howto/losses.html))
+  We decompose the editing framework into different modules, and one can easily construct a customized editor framework by combining them. Specifically, a new design for complex loss modules is proposed for customizing the links between modules, which enables flexible combinations among different modules. (Tutorial for [losses](https://mmediting.readthedocs.io/en/latest/howto/losses.html))
- **Efficient Distributed Training**
@@ -142,7 +142,7 @@ mim install 'mmcv>=2.0.0'
Install MMEditing from source.
```shell
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```
@@ -321,7 +321,7 @@ Please see [quick run](docs/en/get_started/quick_run.md) and [inference](docs/en
-Please refer to [model_zoo](https://mmediting.readthedocs.io/en/1.x/model_zoo/overview.html) for more details.
+Please refer to [model_zoo](https://mmediting.readthedocs.io/en/latest/model_zoo/overview.html) for more details.
🔝Back to top
@@ -362,24 +362,24 @@ Please refer to [LICENSES](LICENSE) for the careful check, if you are using our
## 🏗️ ️OpenMMLab Family
- [MMEngine](https://github.com/open-mmlab/mmengine): OpenMMLab foundational library for training deep learning models.
-- [MMCV](https://github.com/open-mmlab/mmcv/tree/2.x): OpenMMLab foundational library for computer vision.
+- [MMCV](https://github.com/open-mmlab/mmcv): OpenMMLab foundational library for computer vision.
- [MIM](https://github.com/open-mmlab/mim): MIM installs OpenMMLab packages.
-- [MMClassification](https://github.com/open-mmlab/mmclassification/tree/1.x): OpenMMLab image classification toolbox and benchmark.
-- [MMDetection](https://github.com/open-mmlab/mmdetection/tree/3.x): OpenMMLab detection toolbox and benchmark.
-- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d/tree/1.x): OpenMMLab's next-generation platform for general 3D object detection.
-- [MMRotate](https://github.com/open-mmlab/mmrotate/tree/1.x): OpenMMLab rotated object detection toolbox and benchmark.
-- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation/tree/1.x): OpenMMLab semantic segmentation toolbox and benchmark.
-- [MMOCR](https://github.com/open-mmlab/mmocr/tree/1.x): OpenMMLab text detection, recognition, and understanding toolbox.
-- [MMPose](https://github.com/open-mmlab/mmpose/tree/1.x): OpenMMLab pose estimation toolbox and benchmark.
-- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d/tree/1.x): OpenMMLab 3D human parametric model toolbox and benchmark.
-- [MMSelfSup](https://github.com/open-mmlab/mmselfsup/tree/1.x): OpenMMLab self-supervised learning toolbox and benchmark.
-- [MMRazor](https://github.com/open-mmlab/mmrazor/tree/1.x): OpenMMLab model compression toolbox and benchmark.
-- [MMFewShot](https://github.com/open-mmlab/mmfewshot/tree/1.x): OpenMMLab fewshot learning toolbox and benchmark.
-- [MMAction2](https://github.com/open-mmlab/mmaction2/tree/1.x): OpenMMLab's next-generation action understanding toolbox and benchmark.
-- [MMTracking](https://github.com/open-mmlab/mmtracking/tree/1.x): OpenMMLab video perception toolbox and benchmark.
-- [MMFlow](https://github.com/open-mmlab/mmflow/tree/1.x): OpenMMLab optical flow toolbox and benchmark.
-- [MMEditing](https://github.com/open-mmlab/mmediting/tree/1.x): OpenMMLab image and video editing toolbox.
-- [MMGeneration](https://github.com/open-mmlab/mmgeneration/tree/1.x): OpenMMLab image and video generative models toolbox.
+- [MMClassification](https://github.com/open-mmlab/mmclassification): OpenMMLab image classification toolbox and benchmark.
+- [MMDetection](https://github.com/open-mmlab/mmdetection): OpenMMLab detection toolbox and benchmark.
+- [MMDetection3D](https://github.com/open-mmlab/mmdetection3d): OpenMMLab's next-generation platform for general 3D object detection.
+- [MMRotate](https://github.com/open-mmlab/mmrotate): OpenMMLab rotated object detection toolbox and benchmark.
+- [MMSegmentation](https://github.com/open-mmlab/mmsegmentation): OpenMMLab semantic segmentation toolbox and benchmark.
+- [MMOCR](https://github.com/open-mmlab/mmocr): OpenMMLab text detection, recognition, and understanding toolbox.
+- [MMPose](https://github.com/open-mmlab/mmpose): OpenMMLab pose estimation toolbox and benchmark.
+- [MMHuman3D](https://github.com/open-mmlab/mmhuman3d): OpenMMLab 3D human parametric model toolbox and benchmark.
+- [MMSelfSup](https://github.com/open-mmlab/mmselfsup): OpenMMLab self-supervised learning toolbox and benchmark.
+- [MMRazor](https://github.com/open-mmlab/mmrazor): OpenMMLab model compression toolbox and benchmark.
+- [MMFewShot](https://github.com/open-mmlab/mmfewshot): OpenMMLab fewshot learning toolbox and benchmark.
+- [MMAction2](https://github.com/open-mmlab/mmaction2): OpenMMLab's next-generation action understanding toolbox and benchmark.
+- [MMTracking](https://github.com/open-mmlab/mmtracking): OpenMMLab video perception toolbox and benchmark.
+- [MMFlow](https://github.com/open-mmlab/mmflow): OpenMMLab optical flow toolbox and benchmark.
+- [MMEditing](https://github.com/open-mmlab/mmediting): OpenMMLab image and video editing toolbox.
+- [MMGeneration](https://github.com/open-mmlab/mmgeneration): OpenMMLab image and video generative models toolbox.
- [MMDeploy](https://github.com/open-mmlab/mmdeploy): OpenMMLab model deployment framework.
🔝Back to top
diff --git a/README_zh-CN.md b/README_zh-CN.md
index 411919ded0..125c6e255a 100644
--- a/README_zh-CN.md
+++ b/README_zh-CN.md
@@ -19,17 +19,17 @@
[](https://pypi.org/project/mmedit/)
-[](https://mmediting.readthedocs.io/zh_CN/1.x/)
+[](https://mmediting.readthedocs.io/zh_CN/latest/)
[](https://github.com/open-mmlab/mmediting/actions)
[](https://codecov.io/gh/open-mmlab/mmediting)
-[](https://github.com/open-mmlab/mmediting/blob/1.x/LICENSE)
+[](https://github.com/open-mmlab/mmediting/blob/main/LICENSE)
[](https://github.com/open-mmlab/mmediting/issues)
[](https://github.com/open-mmlab/mmediting/issues)
-[📘使用文档](https://mmediting.readthedocs.io/en/1.x/) |
-[🛠️安装教程](https://mmediting.readthedocs.io/zh_CN/1.x/get_started/install.html) |
-[📊模型库](https://mmediting.readthedocs.io/zh_CN/1.x/model_zoo/overview.html) |
-[🆕更新记录](https://mmediting.readthedocs.io/zh_CN/1.x/changelog.html) |
+[📘使用文档](https://mmediting.readthedocs.io/zh_CN/latest/) |
+[🛠️安装教程](https://mmediting.readthedocs.io/zh_CN/latest/get_started/install.html) |
+[📊模型库](https://mmediting.readthedocs.io/zh_CN/latest/model_zoo/overview.html) |
+[🆕更新记录](https://mmediting.readthedocs.io/zh_CN/latest/changelog.html) |
[🚀进行中的项目](https://github.com/open-mmlab/mmediting/projects) |
[🤔提出问题](https://github.com/open-mmlab/mmediting/issues)
@@ -139,7 +139,7 @@ mim install 'mmcv>=2.0.0'
从源码安装 MMEditing
```
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e .
```
@@ -318,7 +318,7 @@ pip3 install -e .
-请参考[模型库](https://mmediting.readthedocs.io/zh_CN/1.x/model_zoo/overview.html)了解详情。
+请参考[模型库](https://mmediting.readthedocs.io/zh_CN/latest/model_zoo/overview.html)了解详情。
🔝返回顶部
diff --git a/configs/disco_diffusion/README.md b/configs/disco_diffusion/README.md
index dcd9ff98aa..90cb59e11c 100644
--- a/configs/disco_diffusion/README.md
+++ b/configs/disco_diffusion/README.md
@@ -104,7 +104,7 @@ save_image(image, "image.png")
## Tutorials
-Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/dev-1.x/configs/disco_diffusion/tutorials.ipynb) tutorial that exhibits the meaning of different parameters, and gives results corresponding to adjustment.
+Considering that `disco-diffusion` contains many adjustable parameters, we provide users with a [jupyter-notebook](./tutorials.ipynb) / [colab](https://githubtocolab.com/open-mmlab/mmediting/blob/main/configs/disco_diffusion/tutorials.ipynb) tutorial that explains the meaning of the different parameters and shows the results of adjusting them.
Refer to [Disco Sheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit).
## Credits
diff --git a/configs/disco_diffusion/README_zh-CN.md b/configs/disco_diffusion/README_zh-CN.md
index f15723f779..51a7f9d99f 100644
--- a/configs/disco_diffusion/README_zh-CN.md
+++ b/configs/disco_diffusion/README_zh-CN.md
@@ -104,7 +104,7 @@ save_image(image, "image.png")
## 教程
-考虑到`disco-diffusion`包含许多可调整的参数,我们为用户提供了一个[jupyter-notebook](./tutorials.ipynb)/[colab](https://githubtocolab.com/open-mmlab/mmediting/blob/dev-1.x/configs/disco_diffusion/tutorials.ipynb)的教程,展示了不同参数的含义,并给出相应的调整结果。
+考虑到`disco-diffusion`包含许多可调整的参数,我们为用户提供了一个[jupyter-notebook](./tutorials.ipynb)/[colab](https://githubtocolab.com/open-mmlab/mmediting/blob/main/configs/disco_diffusion/tutorials.ipynb)的教程,展示了不同参数的含义,并给出相应的调整结果。
请参考[Disco Sheet](https://docs.google.com/document/d/1l8s7uS2dGqjztYSjPpzlmXLjl5PM3IGkRWI3IiCuK7g/edit)。
## 鸣谢
diff --git a/configs/disco_diffusion/tutorials.ipynb b/configs/disco_diffusion/tutorials.ipynb
index 8f0eb8d39c..0f3409f4f3 100644
--- a/configs/disco_diffusion/tutorials.ipynb
+++ b/configs/disco_diffusion/tutorials.ipynb
@@ -76,7 +76,7 @@
"# Install mmediting from source\n",
"%cd /content/\n",
"!rm -rf mmediting\n",
- "!git clone -b dev-1.x https://github.com/open-mmlab/mmediting.git \n",
+ "!git clone https://github.com/open-mmlab/mmediting.git \n",
"%cd mmediting\n",
"!pip install -r requirements.txt\n",
"!pip install -e ."
diff --git a/configs/inst_colorization/README.md b/configs/inst_colorization/README.md
index 3f792aa9a2..691d14c7a6 100644
--- a/configs/inst_colorization/README.md
+++ b/configs/inst_colorization/README.md
@@ -36,7 +36,7 @@ You can use the following commands to colorize an image.
python demo/colorization_demo.py configs/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256.py https://download.openmmlab.com/mmediting/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256-5b9d4eee.pth input.jpg output.jpg
```
-For more demos, you can refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/1.x/user_guides/3_inference.html).
+For more demos, you can refer to [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html).
diff --git a/configs/inst_colorization/README_zh-CN.md b/configs/inst_colorization/README_zh-CN.md
index 6d8184fa2e..38f9183ce1 100644
--- a/configs/inst_colorization/README_zh-CN.md
+++ b/configs/inst_colorization/README_zh-CN.md
@@ -35,7 +35,7 @@ Image colorization is inherently an ill-posed problem with multi-modal uncertain
python demo/colorization_demo.py configs/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256.py https://download.openmmlab.com/mmediting/inst_colorization/inst-colorizatioon_full_official_cocostuff-256x256-5b9d4eee.pth input.jpg output.jpg
```
-更多细节可以参考 [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/1.x/user_guides/3_inference.html)。
+更多细节可以参考 [Tutorial 3: inference with pre-trained models](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html)。
diff --git a/demo/README.md b/demo/README.md
index 6ce948fd45..fecee98804 100644
--- a/demo/README.md
+++ b/demo/README.md
@@ -1,6 +1,6 @@
# MMEditing Demo
-There are some mmediting demos in this folder. We provide python command line usage here to run these demos and more guidance could also be found in the [documentation](https://mmediting.readthedocs.io/en/dev-1.x/user_guides/3_inference.html)
+There are some MMEditing demos in this folder. We provide Python command-line usage here to run these demos, and more guidance can also be found in the [documentation](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html).
Table of contents:
diff --git a/demo/mmediting_inference_tutorial.ipynb b/demo/mmediting_inference_tutorial.ipynb
index 72595185fc..31220d7b06 100644
--- a/demo/mmediting_inference_tutorial.ipynb
+++ b/demo/mmediting_inference_tutorial.ipynb
@@ -318,7 +318,7 @@
"\n",
"Next we describe how to perform inference with python code snippets.\n",
"\n",
- "(We also provide command line interface for you to do inference by running mmediting_inference_demo.py. The usage of this interface could be found in [README.md](./README.md) and more guidance could be found in the [documentation](https://mmediting.readthedocs.io/en/dev-1.x/user_guides/3_inference.html#).)\n"
+ "(We also provide command line interface for you to do inference by running mmediting_inference_demo.py. The usage of this interface could be found in [README.md](./README.md) and more guidance could be found in the [documentation](https://mmediting.readthedocs.io/en/latest/user_guides/3_inference.html#).)\n"
]
},
{
diff --git a/docker/Dockerfile b/docker/Dockerfile
index ff3e47905a..72f312eb59 100644
--- a/docker/Dockerfile
+++ b/docker/Dockerfile
@@ -16,7 +16,7 @@ RUN apt-get update && apt-get install -y git ninja-build libglib2.0-0 libsm6 lib
# Install mmediting
RUN conda clean --all
-RUN git clone -b 1.x https://github.com/open-mmlab/mmediting.git /mmediting
+RUN git clone https://github.com/open-mmlab/mmediting.git /mmediting
WORKDIR /mmediting
ENV FORCE_CUDA="1"
RUN pip install openmim
diff --git a/docs/en/changelog.md b/docs/en/changelog.md
index b87b79d863..bbf2235ecb 100644
--- a/docs/en/changelog.md
+++ b/docs/en/changelog.md
@@ -227,4 +227,4 @@ MMEditing 1.0.0rc0 is the first version of MMEditing 1.x, a part of the OpenMMLa
Built upon the new [training engine](https://github.com/open-mmlab/mmengine), MMEditing 1.x unifies the interfaces of dataset, models, evaluation, and visualization.
-And there are some BC-breaking changes. Please check [the migration tutorial](https://mmediting.readthedocs.io/en/1.x/migration/overview.html) for more details.
+And there are some BC-breaking changes. Please check [the migration tutorial](https://mmediting.readthedocs.io/en/latest/migration/overview.html) for more details.
diff --git a/docs/en/community/projects.md b/docs/en/community/projects.md
index 50e25c903f..544d4e5287 100644
--- a/docs/en/community/projects.md
+++ b/docs/en/community/projects.md
@@ -18,11 +18,11 @@ You can copy and create your own project from the [example project](../../../pro
We also provide some documentation listed below for your reference:
-- [Contribution Guide](https://mmediting.readthedocs.io/en/dev-1.x/community/contributing.html)
+- [Contribution Guide](https://mmediting.readthedocs.io/en/latest/community/contributing.html)
The guides for new contributors about how to add your projects to MMEditing.
-- [New Model Guide](https://mmediting.readthedocs.io/en/dev-1.x/howto/models.html)
+- [New Model Guide](https://mmediting.readthedocs.io/en/latest/howto/models.html)
The documentation of adding new models.
diff --git a/docs/en/get_started/install.md b/docs/en/get_started/install.md
index e0d4c7c39d..c826b47cd9 100644
--- a/docs/en/get_started/install.md
+++ b/docs/en/get_started/install.md
@@ -10,7 +10,7 @@ In this section, you will know about:
## Installation
-We recommend that users follow our [Best practices](#best-practices) to install MMEditing 1.x.
+We recommend that users follow our [Best practices](#best-practices) to install MMEditing.
However, the whole process is highly customizable. See [Customize installation](#customize-installation) section for more information.
### Prerequisites
@@ -69,11 +69,11 @@ mim install 'mmcv>=2.0.0'
pip install git+https://github.com/open-mmlab/mmengine.git
```
-**Step 2.** Install MMEditing 1.x .
+**Step 2.** Install MMEditing.
Install [MMEditing](https://github.com/open-mmlab/mmediting) from the source code.
```shell
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
cd mmediting
pip3 install -e . -v
```
diff --git a/docs/en/howto/dataset.md b/docs/en/howto/dataset.md
index 11086661fe..48e0c00db6 100644
--- a/docs/en/howto/dataset.md
+++ b/docs/en/howto/dataset.md
@@ -16,7 +16,7 @@ In this document, we will introduce the design of each datasets in MMEditing and
## Supported Data Format
-In 1.x version of MMEditing, all datasets are inherited from `BaseDataset`.
+In MMEditing, all datasets inherit from `BaseDataset`.
Each dataset load the list of data info (e.g., data path) by `load_data_list`.
In `__getitem__`, `prepare_data` is called to get the preprocessed data.
In `prepare_data`, data loading pipeline consists of the following steps:
diff --git a/docs/en/howto/models.md b/docs/en/howto/models.md
index 1d9c5828e9..79cd504cf7 100644
--- a/docs/en/howto/models.md
+++ b/docs/en/howto/models.md
@@ -23,16 +23,16 @@ In MMEditing, one algorithm can be splited two compents: **Model** and **Module*
- **Model** are topmost wrappers and always inherint from `BaseModel` provided in MMEngine. **Model** is responsible to network forward, loss calculation and backward, parameters updating, etc. In MMEditing, **Model** should be registered as `MODELS`.
- **Module** includes the neural network **architectures** to train or inference, pre-defined **loss classes**, and **data preprocessors** to preprocess the input data batch. **Module** always present as elements of **Model**. In MMEditing, **Module** should be registered as **MODULES**.
-Take DCGAN model as an example, [generator](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/editors/dcgan/dcgan_generator.py) and [discriminator](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/editors/dcgan/dcgan_discriminator.py) are the **Module**, which generate images and discriminate real or fake images. [`DCGAN`](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/editors/dcgan/dcgan.py) is the **Model**, which take data from dataloader and train generator and discriminator alternatively.
+Take the DCGAN model as an example: its [generator](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/editors/dcgan/dcgan_generator.py) and [discriminator](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/editors/dcgan/dcgan_discriminator.py) are **Modules**, which generate images and discriminate real from fake images. [`DCGAN`](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/editors/dcgan/dcgan.py) is the **Model**, which takes data from the dataloader and trains the generator and discriminator alternately.
You can find the implementation of **Model** and **Module** by the following link.
- **Model**:
- - [Editors](https://github.com/open-mmlab/mmediting/tree/1.x/mmedit/models/editors)
+ - [Editors](https://github.com/open-mmlab/mmediting/tree/main/mmedit/models/editors)
- **Module**:
- - [Layers](https://github.com/open-mmlab/mmediting/tree/1.x/mmedit/models/layers)
- - [Losses](https://github.com/open-mmlab/mmediting/tree/1.x/mmedit/models/losses)
- - [Data Preprocessor](https://github.com/open-mmlab/mmediting/tree/1.x/mmedit/models/data_preprocessors)
+ - [Layers](https://github.com/open-mmlab/mmediting/tree/main/mmedit/models/layers)
+ - [Losses](https://github.com/open-mmlab/mmediting/tree/main/mmedit/models/losses)
+ - [Data Preprocessor](https://github.com/open-mmlab/mmediting/tree/main/mmedit/models/data_preprocessors)
## An example of SRCNN
@@ -552,7 +552,7 @@ If you want to implement specific weights initialization method for you network,
After the implementation of class `DCGANGenerator`, we need to update the model list in `mmedit/models/editors/__init__.py`, so that we can import and use class `DCGANGenerator` by `mmedit.models.editors`.
-Implementation of Class `DCGANDiscriminator` follows the similar logic, and you can find the implementation [here](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/editors/dcgan/dcgan_discriminator.py).
+The implementation of class `DCGANDiscriminator` follows similar logic, and you can find it [here](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/editors/dcgan/dcgan_discriminator.py).
### Step 2: Design the model of DCGAN
@@ -561,14 +561,14 @@ After the implementation of the network **Module**, we need to define our **Mode
Your **Model** should inherit from [`BaseModel`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/base_model/base_model.py#L16) provided by MMEngine and implement three functions, `train_step`, `val_step` and `test_step`.
- `train_step`: This function is responsible to update the parameters of the network and called by MMEngine's Loop ([`IterBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L183) or [`EpochBasedTrainLoop`](https://github.com/open-mmlab/mmengine/blob/main/mmengine/runner/loops.py#L18)). `train_step` take data batch and [`OptimWrapper`](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/optim_wrapper.md) as input and return a dict of log.
-- `val_step`: This function is responsible for getting output for validation during the training process. and is called by [`GenValLoop`](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/engine/runner/loops.py#L12).
-- `test_step`: This function is responsible for getting output in test process and is called by [`GenTestLoop`](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/engine/runner/loops.py#L95).
+- `val_step`: This function is responsible for getting outputs for validation during the training process and is called by [`GenValLoop`](https://github.com/open-mmlab/mmediting/blob/main/mmedit/engine/runner/loops.py#L12).
+- `test_step`: This function is responsible for getting outputs in the test process and is called by [`GenTestLoop`](https://github.com/open-mmlab/mmediting/blob/main/mmedit/engine/runner/loops.py#L95).
-> Note that, in `train_step`, `val_step` and `test_step`, `DataPreprocessor` is called to preprocess the input data batch before feed them to the neural network. To know more about `DataPreprocessor` please refer to this [file](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/data_preprocessors/gen_preprocessor.py) and this [tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/model.md#%E6%95%B0%E6%8D%AE%E5%A4%84%E7%90%86%E5%99%A8datapreprocessor).
+> Note that, in `train_step`, `val_step` and `test_step`, `DataPreprocessor` is called to preprocess the input data batch before feeding it to the neural network. To know more about `DataPreprocessor`, please refer to this [file](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/data_preprocessors/gen_preprocessor.py) and this [tutorial](https://github.com/open-mmlab/mmengine/blob/main/docs/zh_cn/tutorials/model.md#%E6%95%B0%E6%8D%AE%E5%A4%84%E7%90%86%E5%99%A8datapreprocessor).
-For simplify using, we provide [`BaseGAN`](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/base_models/base_gan.py) class in MMEditing, which implements generic `train_step`, `val_step` and `test_step` function for GAN models. With `BaseGAN` as base class, each specific GAN algorithm only need to implement `train_generator` and `train_discriminator`.
+For ease of use, we provide the [`BaseGAN`](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/base_models/base_gan.py) class in MMEditing, which implements generic `train_step`, `val_step` and `test_step` functions for GAN models. With `BaseGAN` as the base class, each specific GAN algorithm only needs to implement `train_generator` and `train_discriminator`.
-In `train_step`, we support data preprocessing, gradient accumulation (realized by [`OptimWrapper`](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/optim_wrapper.md)) and expontial moving averate (EMA) realized by [(`ExponentialMovingAverage`)](https://github.com/open-mmlab/mmediting/blob/1.x/mmedit/models/base_models/average_model.py#L19). With `BaseGAN.train_step`, each specific GAN algorithm only need to implement `train_generator` and `train_discriminator`.
+In `train_step`, we support data preprocessing, gradient accumulation (realized by [`OptimWrapper`](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/optim_wrapper.md)) and exponential moving average (EMA) realized by [`ExponentialMovingAverage`](https://github.com/open-mmlab/mmediting/blob/main/mmedit/models/base_models/average_model.py#L19). With `BaseGAN.train_step`, each specific GAN algorithm only needs to implement `train_generator` and `train_discriminator`.
```python
def train_step(self, data: dict,
diff --git a/docs/en/howto/transforms.md b/docs/en/howto/transforms.md
index 8701e28476..fbcfcdc900 100644
--- a/docs/en/howto/transforms.md
+++ b/docs/en/howto/transforms.md
@@ -27,7 +27,7 @@ A pipeline consists of a sequence of operations. Each operation takes a dict as
The operations are categorized into data loading, pre-processing, and formatting
-In 1.x version of MMEditing, all data transformations are inherited from `BaseTransform`.
+In MMEditing, all data transformations inherit from `BaseTransform`.
The input and output types of transformations are both dict.
### A simple example of data transform
diff --git a/docs/en/switch_language.md b/docs/en/switch_language.md
index 7490e18343..4b4ffc7c9b 100644
--- a/docs/en/switch_language.md
+++ b/docs/en/switch_language.md
@@ -1,3 +1,3 @@
-# English
+# English
-# 简体中文
+# 简体中文
diff --git a/docs/en/user_guides/config.md b/docs/en/user_guides/config.md
index 72b0d6c734..a4b7d51603 100644
--- a/docs/en/user_guides/config.md
+++ b/docs/en/user_guides/config.md
@@ -77,7 +77,7 @@ Please refer to [MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs
## An example of EDSR
To help the users have a basic idea of a complete config,
-we make a brief comments on the [config of the EDSR model](https://github.com/open-mmlab/mmediting/blob/1.x/configs/edsr/edsr_x2c64b16_g1_300k_div2k.py) we implemented as the following.
+we make brief comments on the [config of the EDSR model](https://github.com/open-mmlab/mmediting/blob/main/configs/edsr/edsr_x2c64b16_g1_300k_div2k.py) we implemented, as follows.
For more detailed usage and the corresponding alternative for each modules,
please refer to the API documentation and the [tutorial in MMEngine](https://github.com/open-mmlab/mmengine/blob/main/docs/en/tutorials/config.md).
@@ -285,7 +285,7 @@ resume = False # Resume checkpoints from a given path, the training will be res
## An example of StyleGAN2
-Taking [Stylegan2 at 1024x1024 scale](https://github.com/open-mmlab/mmediting/blob/1.x/configs//styleganv2/stylegan2_c2_8xb4-fp16-global-800kiters_quicktest-ffhq-256x256.py) as an example,
+Taking [StyleGAN2 at the 1024x1024 scale](https://github.com/open-mmlab/mmediting/blob/main/configs/styleganv2/stylegan2_c2_8xb4-fp16-global-800kiters_quicktest-ffhq-256x256.py) as an example,
we introduce each field in the config according to different function modules.
### Model config
@@ -416,7 +416,7 @@ optim_wrapper = dict(
`param_scheduler` is a field that configures methods of adjusting optimization hyperparameters such as learning rate and momentum.
Users can combine multiple schedulers to create a desired parameter adjustment strategy.
Find more in [parameter scheduler tutorial](https://mmengine.readthedocs.io/en/latest/tutorials/param_scheduler.html).
-Since StyleGAN2 do not use parameter scheduler, we use config in [CycleGAN](https://github.com/open-mmlab/mmediting/blob/1.x/configs/cyclegan/cyclegan_lsgan-id0-resnet-in_1xb1-250kiters_summer2winter.py) as an example:
+Since StyleGAN2 does not use a parameter scheduler, we use the config of [CycleGAN](https://github.com/open-mmlab/mmediting/blob/main/configs/cyclegan/cyclegan_lsgan-id0-resnet-in_1xb1-250kiters_summer2winter.py) as an example:
```python
# parameter scheduler in CycleGAN config
diff --git a/docs/en/user_guides/dataset_prepare.md b/docs/en/user_guides/dataset_prepare.md
index b26ae391cf..b33c3f6640 100644
--- a/docs/en/user_guides/dataset_prepare.md
+++ b/docs/en/user_guides/dataset_prepare.md
@@ -23,7 +23,7 @@ For example, you can simply prepare Vimeo90K-triplet datasets by downloading dat
## Prepare datasets
-Some datasets need to be preprocessed before training or testing. We support many scripts to prepare datasets in [tools/dataset_converters](https://github.com/open-mmlab/mmediting/tree/1.x/tools/dataset_converters). And you can follow the tutorials of every dataset to run scripts.
+Some datasets need to be preprocessed before training or testing. We provide many scripts to prepare datasets in [tools/dataset_converters](https://github.com/open-mmlab/mmediting/tree/main/tools/dataset_converters), and you can follow the tutorials of each dataset to run the scripts.
For example, we recommend cropping the DIV2K images to sub-images. We provide a script to prepare cropped DIV2K dataset. You can run the following command:
```shell
diff --git a/docs/en/user_guides/deploy.md b/docs/en/user_guides/deploy.md
index ce2ff00274..1b80a5eb55 100644
--- a/docs/en/user_guides/deploy.md
+++ b/docs/en/user_guides/deploy.md
@@ -1,7 +1,7 @@
# Tutorial 8: Deploy models in MMEditing
The deployment of OpenMMLab codebases, including MMClassification, MMDetection, MMEditing and so on are supported by [MMDeploy](https://github.com/open-mmlab/mmdeploy).
-The latest deployment guide for MMEditing can be found from [here](https://mmdeploy.readthedocs.io/en/1.x/04-supported-codebases/mmedit.html).
+The latest deployment guide for MMEditing can be found [here](https://mmdeploy.readthedocs.io/en/latest/04-supported-codebases/mmedit.html).
This tutorial is organized as follows:
@@ -15,7 +15,7 @@ This tutorial is organized as follows:
## Installation
-Please follow the [guide](../get_started/install.md) to install mmedit. And then install mmdeploy from source by following [this](https://mmdeploy.readthedocs.io/en/1.x/get_started.html#installation) guide.
+Please follow the [guide](../get_started/install.md) to install mmedit, and then install mmdeploy from source by following [this](https://mmdeploy.readthedocs.io/en/latest/get_started.html#installation) guide.
```{note}
If you install mmdeploy prebuilt package, please also clone its repository by 'git clone https://github.com/open-mmlab/mmdeploy.git --depth=1' to get the deployment config files.
@@ -48,7 +48,7 @@ torch2onnx(img, work_dir, save_file, deploy_cfg, model_cfg,
export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)
```
-It is crucial to specify the correct deployment config during model conversion.MMDeploy has already provided builtin deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmedit) of all supported backends for mmedit, under which the config file path follows the pattern:
+It is crucial to specify the correct deployment config during model conversion. MMDeploy has already provided built-in deployment config [files](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmedit) of all supported backends for mmedit, under which the config file path follows the pattern:
```
{task}/{task}_{backend}-{precision}_{static | dynamic}_{shape}.py
@@ -151,8 +151,8 @@ result = restorer(img)
cv2.imwrite('output_restorer.bmp', result)
```
-Besides python API, MMDeploy SDK also provides other FFI (Foreign Function Interface), such as C, C++, C#, Java and so on. You can learn their usage from [demos](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo).
+Besides the Python API, the MMDeploy SDK also provides other FFIs (Foreign Function Interfaces), such as C, C++, C# and Java. You can learn their usage from the [demos](https://github.com/open-mmlab/mmdeploy/tree/main/demo).
## Supported models
-Please refer to [here](https://mmdeploy.readthedocs.io/en/1.x/04-supported-codebases/mmedit.html#supported-models) for the supported model list.
+Please refer to [here](https://mmdeploy.readthedocs.io/en/latest/04-supported-codebases/mmedit.html#supported-models) for the supported model list.
diff --git a/docs/en/user_guides/inference.md b/docs/en/user_guides/inference.md
index cc71aaf608..c2a87f4df1 100644
--- a/docs/en/user_guides/inference.md
+++ b/docs/en/user_guides/inference.md
@@ -113,7 +113,7 @@ model = init_model(config_file, checkpoint_file, device=device)
fake_imgs = sample_ddpm_model(model, 4)
```
-Indeed, we have already provided a more friendly demo script to users. You can use [demo/ddpm_demo.py](https://github.com/open-mmlab/mmediting/blob/1.x/demo/ddpm_demo.py) with the following commands:
+Indeed, we have already provided a more user-friendly demo script. You can use [demo/ddpm_demo.py](https://github.com/open-mmlab/mmediting/blob/main/demo/ddpm_demo.py) with the following commands:
```shell
python demo/ddpm_demo.py \
diff --git a/docs/en/user_guides/metrics.md b/docs/en/user_guides/metrics.md
index 500bd4f07e..5e04a8e4a8 100644
--- a/docs/en/user_guides/metrics.md
+++ b/docs/en/user_guides/metrics.md
@@ -141,7 +141,7 @@ val_evaluator = [
Fréchet Inception Distance is a measure of similarity between two datasets of images. It was shown to correlate well with the human judgment of visual quality and is most often used to evaluate the quality of samples of Generative Adversarial Networks. FID is calculated by computing the Fréchet distance between two Gaussians fitted to feature representations of the Inception network.
-In `MMEditing`, we provide two versions for FID calculation. One is the commonly used PyTorch version and the other one is used in StyleGAN paper. Meanwhile, we have compared the difference between these two implementations in the StyleGAN2-FFHQ1024 model (the details can be found [here](https://github.com/open-mmlab/mmediting/blob/1.x/configs/styleganv2/README.md)). Fortunately, there is a marginal difference in the final results. Thus, we recommend users adopt the more convenient PyTorch version.
+In `MMEditing`, we provide two versions of FID calculation. One is the commonly used PyTorch version and the other is the one used in the StyleGAN paper. Meanwhile, we have compared the difference between these two implementations on the StyleGAN2-FFHQ1024 model (the details can be found [here](https://github.com/open-mmlab/mmediting/blob/main/configs/styleganv2/README.md)). Fortunately, there is only a marginal difference in the final results. Thus, we recommend users adopt the more convenient PyTorch version.
**About PyTorch version and Tero's version:** The commonly used PyTorch version adopts the modified InceptionV3 network to extract features for real and fake images. However, Tero's FID requires a [script module](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt) for Tensorflow InceptionV3. Note that applying this script module needs `PyTorch >= 1.6.0`.
@@ -227,7 +227,7 @@ to [evaluation](../user_guides/train_test.md) for details.
## Precision and Recall
-Our `Precision and Recall` implementation follows the version used in StyleGAN2. In this metric, a VGG network will be adopted to extract the features for images. Unfortunately, we have not found a PyTorch VGG implementation leading to similar results with Tero's version used in StyleGAN2. (About the differences, please see this [file](https://github.com/open-mmlab/mmediting/blob/1.x/configs/styleganv2/README.md).) Thus, in our implementation, we adopt [Teor's VGG](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt) network by default. Importantly, applying this script module needs `PyTorch >= 1.6.0`. If with a lower PyTorch version, we will use the PyTorch official VGG network for feature extraction.
+Our `Precision and Recall` implementation follows the version used in StyleGAN2. In this metric, a VGG network is adopted to extract features from images. Unfortunately, we have not found a PyTorch VGG implementation that produces results similar to Tero's version used in StyleGAN2. (About the differences, please see this [file](https://github.com/open-mmlab/mmediting/blob/main/configs/styleganv2/README.md).) Thus, in our implementation, we adopt [Tero's VGG](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt) network by default. Importantly, applying this script module needs `PyTorch >= 1.6.0`. With a lower PyTorch version, we will use the official PyTorch VGG network for feature extraction.
To evaluate with `P&R`, please add the following configuration in the config file:
diff --git a/docs/en/user_guides/train_test.md b/docs/en/user_guides/train_test.md
index ecc60fe58f..6b1d9cb55a 100644
--- a/docs/en/user_guides/train_test.md
+++ b/docs/en/user_guides/train_test.md
@@ -72,7 +72,7 @@ You can check [slurm_test.sh](../../../tools/slurm_test.sh) for full arguments a
### Test with specific metrics
MMEditing provides various **evaluation metrics**, i.e., MS-SSIM, SWD, IS, FID, Precision&Recall, PPL, Equivarience, TransFID, TransIS, etc.
-We have provided unified evaluation scripts in [tools/test.py](https://github.com/open-mmlab/mmediting/tree/1.x/tools/test.py) for all models.
+We provide a unified evaluation script, [tools/test.py](https://github.com/open-mmlab/mmediting/tree/main/tools/test.py), for all models.
If users want to evaluate their models with some metrics, you can add the `metrics` into your config file like this:
```python
@@ -102,7 +102,7 @@ Then users can test models with the command below:
bash tools/dist_test.sh ${CONFIG_FILE} ${CKPT_FILE}
```
-If you are in slurm environment, please switch to the [tools/slurm_test.sh](https://github.com/open-mmlab/mmediting/tree/1.x/tools/slurm_test.sh) by using the following commands:
+If you are in a slurm environment, please switch to [tools/slurm_test.sh](https://github.com/open-mmlab/mmediting/tree/main/tools/slurm_test.sh) by using the following commands:
```shell
sh slurm_test.sh ${PLATFORM} ${JOBNAME} ${CONFIG_FILE} ${CKPT_FILE}
diff --git a/docs/zh_cn/.dev_scripts/update_model_zoo.py b/docs/zh_cn/.dev_scripts/update_model_zoo.py
index c780bb314e..7940cf4b62 100755
--- a/docs/zh_cn/.dev_scripts/update_model_zoo.py
+++ b/docs/zh_cn/.dev_scripts/update_model_zoo.py
@@ -12,7 +12,7 @@
import titlecase
from tqdm import tqdm
-github_link = 'https://github.com/open-mmlab/mmediting/blob/1.x/'
+github_link = 'https://github.com/open-mmlab/mmediting/blob/main/'
def anchor(name):
diff --git a/docs/zh_cn/changelog.md b/docs/zh_cn/changelog.md
index 315591008a..8c16e2f9bd 100644
--- a/docs/zh_cn/changelog.md
+++ b/docs/zh_cn/changelog.md
@@ -186,4 +186,4 @@ MMEditing 1.0.0rc0 是 MMEditing 1.x 的第一个版本,是 OpenMMLab 2.0 项
基于新的[训练引擎](https://github.com/open-mmlab/mmengine), MMEditing 1.x 统一了数据、模型、评测和可视化的接口。
-该版本存在有一些 BC-breaking 的修改。 请在[迁移指南](https://mmediting.readthedocs.io/zh_CN/1.x/migration/overview.html)中查看更多细节。
+该版本存在有一些 BC-breaking 的修改。 请在[迁移指南](https://mmediting.readthedocs.io/zh_CN/latest/migration/overview.html)中查看更多细节。
diff --git a/docs/zh_cn/switch_language.md b/docs/zh_cn/switch_language.md
index 5fc8dc67b1..5f17de764a 100644
--- a/docs/zh_cn/switch_language.md
+++ b/docs/zh_cn/switch_language.md
@@ -1,3 +1,3 @@
-## English
+## English
-## 简体中文
+## 简体中文
diff --git a/docs/zh_cn/user_guides/dataset_prepare.md b/docs/zh_cn/user_guides/dataset_prepare.md
index c263aa8d37..234378b0ad 100644
--- a/docs/zh_cn/user_guides/dataset_prepare.md
+++ b/docs/zh_cn/user_guides/dataset_prepare.md
@@ -24,7 +24,7 @@
## 准备数据集
一些数据集需要在训练或测试之前进行预处理。我们在
-[tools/dataset_converters](https://github.com/open-mmlab/mmediting/tree/1.x/tools/dataset_converters)中支持许多用来准备数据集的脚本。
+[tools/dataset_converters](https://github.com/open-mmlab/mmediting/tree/main/tools/dataset_converters)中支持许多用来准备数据集的脚本。
您可以遵循每个数据集的教程来运行脚本。例如,我们建议将DIV2K图像裁剪为子图像。我们提供了一个脚本来准备裁剪的DIV2K数据集。可以运行以下命令:
```shell
diff --git a/docs/zh_cn/user_guides/deploy.md b/docs/zh_cn/user_guides/deploy.md
index d5ac375884..77be5c0fe0 100644
--- a/docs/zh_cn/user_guides/deploy.md
+++ b/docs/zh_cn/user_guides/deploy.md
@@ -1,7 +1,7 @@
# 教程 8:模型部署指南
[MMDeploy](https://github.com/open-mmlab/mmdeploy) 是 OpenMMLab 的部署仓库,负责包括 MMClassification、MMDetection、MMEditing 等在内的各算法库的部署工作。
-你可以从[这里](https://mmdeploy.readthedocs.io/zh_CN/1.x/04-supported-codebases/mmedit.html)获取 MMDeploy 对 MMClassification 部署支持的最新文档。
+你可以从[这里](https://mmdeploy.readthedocs.io/zh_CN/latest/04-supported-codebases/mmedit.html)获取 MMDeploy 对 MMEditing 部署支持的最新文档。
本文的结构如下:
@@ -15,7 +15,7 @@
## 安装
-请参考[此处](../get_started/install.md)安装 mmedit。然后,按照[说明](https://mmdeploy.readthedocs.io/zh_CN/1.x/get_started.html#mmdeploy)安装 mmdeploy。
+请参考[此处](../get_started/install.md)安装 mmedit。然后,按照[说明](https://mmdeploy.readthedocs.io/zh_CN/latest/get_started.html#mmdeploy)安装 mmdeploy。
```{note}
如果安装的是 mmdeploy 预编译包,那么也请通过 'git clone https://github.com/open-mmlab/mmdeploy.git --depth=1' 下载 mmdeploy 源码。因为它包含了部署时要用到的配置文件
@@ -45,7 +45,7 @@ torch2onnx(img, work_dir, save_file, deploy_cfg, model_cfg,
export2SDK(deploy_cfg, model_cfg, work_dir, pth=model_checkpoint, device=device)
```
-转换的关键之一是使用正确的配置文件。项目中已内置了各后端部署[配置文件](https://github.com/open-mmlab/mmdeploy/tree/1.x/configs/mmedit)。
+转换的关键之一是使用正确的配置文件。项目中已内置了各后端部署[配置文件](https://github.com/open-mmlab/mmdeploy/tree/main/configs/mmedit)。
文件的命名模式是:
```
@@ -148,8 +148,8 @@ cv2.imwrite('output_restorer.bmp', result)
```
除了python API,mmdeploy SDK 还提供了诸如 C、C++、C#、Java等多语言接口。
-你可以参考[样例](https://github.com/open-mmlab/mmdeploy/tree/1.x/demo)学习其他语言接口的使用方法。
+你可以参考[样例](https://github.com/open-mmlab/mmdeploy/tree/main/demo)学习其他语言接口的使用方法。
## 模型支持列表
-请参考[这里](https://mmdeploy.readthedocs.io/zh_CN/1.x/04-supported-codebases/mmedit.html#id7)
+请参考[这里](https://mmdeploy.readthedocs.io/zh_CN/latest/04-supported-codebases/mmedit.html#id7)
diff --git a/docs/zh_cn/user_guides/metrics.md b/docs/zh_cn/user_guides/metrics.md
index 11620db4bf..30b1ce5eb0 100644
--- a/docs/zh_cn/user_guides/metrics.md
+++ b/docs/zh_cn/user_guides/metrics.md
@@ -140,7 +140,7 @@ val_evaluator = [
Fréchet初始距离是两个图像数据集之间相似度的度量。它被证明与人类对视觉质量的判断有很好的相关性,最常用于评估生成对抗网络样本的质量。FID是通过计算两个高斯函数之间的Fréchet距离来计算的,这些高斯函数适合于Inception网络的特征表示。
-在`MMEditing`中,我们提供了两个版本的FID计算。一个是常用的PyTorch版本,另一个用于StyleGAN。同时,我们在StyleGAN2-FFHQ1024模型中比较了这两种实现之间的差异(详细信息可以在这里找到\[https://github.com/open-mmlab/mmediting/blob/1.x/configs/styleganv2/README.md\])。幸运的是,最终结果只是略有不同。因此,我们建议用户采用更方便的PyTorch版本。
+在`MMEditing`中,我们提供了两个版本的FID计算。一个是常用的PyTorch版本,另一个用于StyleGAN。同时,我们在StyleGAN2-FFHQ1024模型中比较了这两种实现之间的差异(详细信息可以在这里找到\[https://github.com/open-mmlab/mmediting/blob/main/configs/styleganv2/README.md\])。幸运的是,最终结果只是略有不同。因此,我们建议用户采用更方便的PyTorch版本。
**关于PyTorch版本和Tero版本:** 常用的PyTorch版本采用修改后的InceptionV3网络提取真假图像特征。然而,Tero的FID需要Tensorflow InceptionV3的[脚本模块](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/inception-2015-12-05.pt)。注意,应用此脚本模块需要' PyTorch >= 1.6.0 '。
@@ -224,7 +224,7 @@ metrics = [
## Precision and Recall
-我们的'Precision and Recall'实现遵循StyleGAN2中使用的版本。在该度量中,采用VGG网络对图像进行特征提取。不幸的是,我们还没有发现PyTorch VGG实现与StyleGAN2中使用的Tero版本产生类似的结果。(关于差异,请参阅这个[文件](https://github.com/open-mmlab/mmediting/blob/1.x/configs/styleganv2/README.md)。)因此,在我们的实现中,我们默认采用[Teor's VGG](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt)网络。需要注意的是,应用这个脚本模块需要'PyTorch >= 1.6.0'。如果使用较低的PyTorch版本,我们将使用PyTorch官方VGG网络进行特征提取。
+我们的'Precision and Recall'实现遵循StyleGAN2中使用的版本。在该度量中,采用VGG网络对图像进行特征提取。不幸的是,我们还没有发现PyTorch VGG实现与StyleGAN2中使用的Tero版本产生类似的结果。(关于差异,请参阅这个[文件](https://github.com/open-mmlab/mmediting/blob/main/configs/styleganv2/README.md)。)因此,在我们的实现中,我们默认采用[Teor's VGG](https://nvlabs-fi-cdn.nvidia.com/stylegan2-ada-pytorch/pretrained/metrics/vgg16.pt)网络。需要注意的是,应用这个脚本模块需要'PyTorch >= 1.6.0'。如果使用较低的PyTorch版本,我们将使用PyTorch官方VGG网络进行特征提取。
要使用' P&R '进行评估,请在配置文件中添加以下配置:
```python
diff --git a/mmedit/datasets/basic_conditional_dataset.py b/mmedit/datasets/basic_conditional_dataset.py
index 5e61c41223..562f9f3f17 100644
--- a/mmedit/datasets/basic_conditional_dataset.py
+++ b/mmedit/datasets/basic_conditional_dataset.py
@@ -15,9 +15,9 @@
class BasicConditionalDataset(BaseDataset):
"""Custom dataset for conditional GAN. This class is based on the
combination of `BaseDataset` (https://github.com/open-
- mmlab/mmclassification/blob/1.x/mmcls/datasets/base_dataset.py) # noqa and
- `CustomDataset` (https://github.com/open-
- mmlab/mmclassification/blob/1.x/mmcls/datasets/custom.py). # noqa.
+ mmlab/mmclassification/blob/main/mmcls/datasets/base_dataset.py) # noqa
+ and `CustomDataset` (https://github.com/open-
+ mmlab/mmclassification/blob/main/mmcls/datasets/custom.py). # noqa.
The dataset supports two kinds of annotation format.
diff --git a/projects/README.md b/projects/README.md
index f6e6c0ca9e..f0da689ed8 100644
--- a/projects/README.md
+++ b/projects/README.md
@@ -18,11 +18,11 @@ You can copy and create your own project from the [example project](./example_pr
We also provide some documentation listed below for your reference:
-- [Contribution Guide](https://mmediting.readthedocs.io/en/dev-1.x/community/contributing.html)
+- [Contribution Guide](https://mmediting.readthedocs.io/en/latest/community/contributing.html)
The guides for new contributors about how to add your projects to MMEditing.
-- [New Model Guide](https://mmediting.readthedocs.io/en/dev-1.x/howto/models.html)
+- [New Model Guide](https://mmediting.readthedocs.io/en/latest/howto/models.html)
The documentation of adding new models.
diff --git a/projects/example_project/README.md b/projects/example_project/README.md
index 7675bfb700..06cd1e561d 100644
--- a/projects/example_project/README.md
+++ b/projects/example_project/README.md
@@ -18,7 +18,7 @@ This is an implementation of \[XXX\].
### Setup Environment \[required\]
-Please refer to [Get Started](https://mmediting.readthedocs.io/en/1.x/get_started/I.html) to install
+Please refer to [Get Started](https://mmediting.readthedocs.io/en/latest/get_started/I.html) to install
MMEditing.
At first, add the current folder to `PYTHONPATH`, so that Python can find your code. Run command in the current directory to add it.
@@ -31,7 +31,7 @@ export PYTHONPATH=`pwd`:$PYTHONPATH
### Data Preparation \[optional\]
-Prepare the ImageNet-2012 dataset according to the [instruction](https://mmediting.readthedocs.io/en/dev-1.x/user_guides/dataset_prepare.html#imagenet).
+Prepare the ImageNet-2012 dataset according to the [instruction](https://mmediting.readthedocs.io/en/latest/user_guides/dataset_prepare.html#imagenet).
### Training commands \[optional\]
@@ -129,7 +129,7 @@ to MMediting projects.
- [ ] Unit tests
-
+
- [ ] Code style
@@ -137,4 +137,4 @@ to MMediting projects.
- [ ] `metafile.yml` and `README.md`
-
+
diff --git a/tools/gui/README.md b/tools/gui/README.md
index 6de87446a7..483ad8e1da 100644
--- a/tools/gui/README.md
+++ b/tools/gui/README.md
@@ -58,13 +58,7 @@ pip install opencv-python-headless
Install MMEditing.
```shell
-git clone -b 1.x https://github.com/open-mmlab/mmediting.git
-```
-
-If you want to follow the newest features, you can clone `dev-1.x` branch.
-
-```shell
-git clone -b dev-1.x https://github.com/open-mmlab/mmediting.git
+git clone https://github.com/open-mmlab/mmediting.git
```
**Step 3.**