This repository contains the code submitted for the OpenMMLab Algorithm Ecological Challenge. It implements the paper RepVGG: Making VGG-style ConvNets Great Again.
#### TODO: check out a new branch to update the RingAllReduce distributed-training code
- RepVGG-B2g4 top-1 accuracy: 79.38
- RepVGG-B3g4 top-1 accuracy: 80.22
- RepVGG-B3 top-1 accuracy: 80.52
MMCV is a foundational library for computer vision research and supports many research projects as below:
- MMClassification: OpenMMLab image classification toolbox and benchmark.
- MMDetection: OpenMMLab detection toolbox and benchmark.
- MMDetection3D: OpenMMLab's next-generation platform for general 3D object detection.
- MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
- MMAction2: OpenMMLab's next-generation action understanding toolbox and benchmark.
- MMTracking: OpenMMLab video perception toolbox and benchmark.
- ...
If PyTorch is not installed, you can install it with conda:
# Create a conda virtual environment and activate it
> conda create -n open-mmlab python=3.7 -y
> conda activate open-mmlab
# If you have CUDA 10.1 installed under /usr/local/cuda and would like to install PyTorch 1.5, install the prebuilt PyTorch with CUDA 10.1:
> conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
Then install the MMClassification repository:
- Install MMCV using MIM
> pip install git+https://github.com/open-mmlab/mim.git
> mim install mmcv-full
- clone MMClassification and install
> git clone https://github.com/open-mmlab/mmclassification.git
> cd mmclassification
> pip install -e .
- Register RepVGG in MMClassification
> cp RepVGG-openMMLab/backbones/RepVGG.py mmclassification/mmcls/models/backbones/
in mmclassification/mmcls/models/backbones/__init__.py:
...
from .RepVGG import RepVGG
__all__ = [
..., 'RepVGG'
]
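The import and `__all__` entry above make the backbone visible to mmcls's registry, which builds models by name from config dicts. As a rough, self-contained illustration of that pattern (this is a sketch, not mmcv's actual implementation; the `Registry` here is a simplified stand-in):

```python
# Minimal sketch of the registry pattern mmcv/mmcls use to look up
# backbones by the 'type' field of a config dict.
# Illustration only -- not mmcv's real Registry code.

class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # Decorator: store the class under its own name.
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        # cfg is a dict like dict(type='RepVGG', deploy=False);
        # 'type' selects the class, the rest become constructor kwargs.
        cfg = dict(cfg)
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)

BACKBONES = Registry('backbone')

@BACKBONES.register_module
class RepVGG:  # stand-in for the real backbone class
    def __init__(self, deploy=False):
        self.deploy = deploy

backbone = BACKBONES.build(dict(type='RepVGG', deploy=True))
```

This is why the config files below can refer to the backbone simply as `type='RepVGG'`.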
- Copy the config file to mmclassification/config
> cp RepVGG-openMMLab/config/repvggb2g4_b32x8.py mmclassification/config/
- Train the model (requires the ImageNet dataset)
> cd mmclassification
# single-gpu
> python tools/train.py config/repvggb2g4_b32x8.py [optional arguments]
# multi-gpu
> ./tools/dist_train.sh config/repvggb2g4_b32x8.py 8 [optional arguments]
# Optional arguments:
# --no-validate (not suggested): By default, the codebase performs evaluation every k epochs (default: 1) during training. Use --no-validate to disable this behavior.
# --work-dir ${WORK_DIR}: Override the working directory specified in the config file.
# --resume-from ${CHECKPOINT_FILE}: Resume from a previous checkpoint file.
# Difference between resume-from and load-from: resume-from loads both the model weights and the optimizer state, and the epoch is also inherited from the specified checkpoint. It is usually used to resume a training process that was interrupted accidentally. load-from loads only the model weights, and training starts from epoch 0. It is usually used for fine-tuning.
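The resume-from / load-from distinction can be sketched with a plain checkpoint dict (the keys below are illustrative, not mmcls's exact checkpoint format):

```python
# Sketch of resume-from vs load-from on a toy checkpoint dict.
# Keys are assumptions for illustration, not mmcls's exact format.

checkpoint = {
    'state_dict': {'conv.weight': [0.1, 0.2]},  # model weights
    'optimizer': {'lr': 0.01},                  # optimizer state
    'epoch': 42,
}

def resume_from(ckpt):
    # Restores weights, optimizer state, AND the epoch counter.
    return ckpt['state_dict'], ckpt['optimizer'], ckpt['epoch']

def load_from(ckpt):
    # Restores only the weights; training restarts at epoch 0.
    return ckpt['state_dict'], None, 0

_, opt, epoch = resume_from(checkpoint)   # epoch == 42, optimizer kept
_, opt, epoch = load_from(checkpoint)     # epoch == 0, optimizer reset
```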
It is recommended to symlink the dataset root to $MMCLASSIFICATION/data. If your folder structure is different, you may need to change the corresponding paths in the config files.
# data/download_imagenet.sh automatically builds the file structure that ImageNet needs for mmcls
> mkdir -p mmclassification/data
> cp RepVGG-openMMLab/data/download_imagenet.sh mmclassification/data
> cd mmclassification/data
> bash download_imagenet.sh
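The script above builds the directory layout the default mmcls configs expect. A small checker sketch (the directory names below are assumptions based on the default ImageNet config; adjust them to match your own config):

```python
# Sketch: verify the ImageNet directory layout the default mmcls
# configs expect. Paths are assumptions -- adjust to your config.
import os

EXPECTED = [
    'imagenet/train',  # one subfolder per class
    'imagenet/val',
    'imagenet/meta',   # train.txt / val.txt annotation lists
]

def check_layout(data_root):
    """Return the list of expected subdirectories that are missing."""
    return [p for p in EXPECTED
            if not os.path.isdir(os.path.join(data_root, p))]

# Example: check_layout('mmclassification/data') -> [] when complete
```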
Pre-trained models in Google Drive:
- RepVGGB2g4.pth
- RepVGGB3g4.pth
- RepVGGB3.pth
- RepVGG-B2g4-deploy.pth
- RepVGG-B3g4-deploy.pth
- RepVGG-B3-deploy.pth
# in mmclassification
# single-gpu
> python tools/test.py config/repvggb2g4_b32x8.py ${CHECKPOINT_FILE} [--metrics ${METRICS}] [--out ${RESULT_FILE}]
# multi-gpu
> ./tools/dist_test.sh config/repvggb2g4_b32x8.py ${CHECKPOINT_FILE} 8 [--metrics ${METRICS}] [--out ${RESULT_FILE}]
# CHECKPOINT_FILE: checkpoint path
# RESULT_FILE: filename of the output results. If not specified, the results are not saved to a file. Supported formats include json, yaml, and pickle.
# METRICS: items to be evaluated on the results, e.g. accuracy, precision, recall.
To deploy the model, or to test the accuracy of the converted ("deploy") model, proceed as follows (the results are essentially the same; there is no loss of accuracy):
- Modify the default configuration file
# in config/repvggb2g4_b32x8.py
model = dict(
    ...
    backbone=dict(
        ...
        deploy=True
    )
)
- Use the parameter file that has already been converted (e.g. RepVGG-B2g4-deploy.pth)
> cd mmclassification/
> python tools/test.py config/repvggb2g4_b32x8.py RepVGG-B2g4-deploy.pth --metrics accuracy
- Why is the structure of RepVGG different between training and inference? The answer is here
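In short: RepVGG trains with three parallel branches (3x3 conv, 1x1 conv, identity) but, because convolution is linear, the branches can be merged into a single 3x3 conv for inference. A simplified sketch of that merge (batch-norm fusion is omitted here; each branch is treated as a bare conv, so this illustrates the idea rather than reproducing the paper's full conversion):

```python
# Simplified sketch of RepVGG's structural re-parameterization.
# BN fusion is omitted: each branch is a bare conv kernel here.
import numpy as np

def pad_1x1_to_3x3(k1x1):
    # Embed a 1x1 kernel in the centre of a zero 3x3 kernel.
    out_c, in_c = k1x1.shape[0], k1x1.shape[1]
    k3x3 = np.zeros((out_c, in_c, 3, 3))
    k3x3[:, :, 1, 1] = k1x1[:, :, 0, 0]
    return k3x3

def identity_as_3x3(channels):
    # The identity branch expressed as an equivalent 3x3 conv kernel.
    k = np.zeros((channels, channels, 3, 3))
    for c in range(channels):
        k[c, c, 1, 1] = 1.0
    return k

def merge_branches(k3, k1, channels):
    # Convolution is linear, so summing the kernels is equivalent to
    # summing the outputs of the three branches.
    return k3 + pad_1x1_to_3x3(k1) + identity_as_3x3(channels)
```

This is why the deployed ("deploy=True") model is a plain VGG-style stack of 3x3 convs while the training-time model is multi-branch, and why the converted weights give the same outputs.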
If you find this project useful in your research, please consider citing:
@misc{mmcv,
title={{MMCV: OpenMMLab} Computer Vision Foundation},
author={MMCV Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmcv}},
year={2018}
}
@misc{2020mmclassification,
title={OpenMMLab's Image Classification Toolbox and Benchmark},
author={MMClassification Contributors},
howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
year={2020}
}