RepVGG-openMMLab

This repository contains the code submitted for the OpenMMLab Algorithm Ecological Challenge. It implements the paper RepVGG: Making VGG-style ConvNets Great Again

TODO: check out a new branch to update the RingAllReduce distributed training code

Results

  • RepVGG-B2g4 top-1 accuracy is 79.38
  • RepVGG-B3g4 top-1 accuracy is 80.22
  • RepVGG-B3 top-1 accuracy is 80.52

How to Use?

What is MMCV?

MMCV is a foundational library for computer vision research and supports many research projects, such as:

  • MMClassification: OpenMMLab image classification toolbox and benchmark.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMDetection3D: OpenMMLab’s next-generation platform for general 3D object detection.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.
  • MMAction2: OpenMMLab’s next-generation action understanding toolbox and benchmark.
  • MMTracking: OpenMMLab video perception toolbox and benchmark.
  • ...

Using MMCV for the first time?

If PyTorch is not installed, you can install it with conda:

# Create a conda virtual environment and activate it

> conda create -n open-mmlab python=3.7 -y
> conda activate open-mmlab

# If you have CUDA 10.1 installed under /usr/local/cuda and would like to install PyTorch 1.5, install the prebuilt PyTorch with CUDA 10.1

> conda install pytorch cudatoolkit=10.1 torchvision -c pytorch
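A quick way to confirm that the installation picked up the GPU (the printed version depends on what conda resolved):

# run inside the open-mmlab environment
import torch

print(torch.__version__)           # e.g. 1.5.x if the command above was used as-is
print(torch.cuda.is_available())   # should print True if the CUDA 10.1 build was installed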

Then install MMCV and the MMClassification repository:

  1. Install MMCV using MIM
> pip install git+https://github.com/open-mmlab/mim.git
> mim install mmcv-full
  2. Clone MMClassification and install it
> git clone https://github.com/open-mmlab/mmclassification.git
> cd mmclassification
> pip install -e .
  3. Register RepVGG in MMClassification
> cp RepVGG-openMMLab/backbones/RepVGG.py mmclassification/mmcls/models/backbones/

Then, in mmclassification/mmcls/models/backbones/__init__.py, add:

...
from .RepVGG import RepVGG

__all__ = [
    ..., 'RepVGG'
]
  4. Copy the config file to mmclassification/config (a sketch of the model section of this config, plus a quick registration check, follows after this list)
> cp RepVGG-openMMLab/config/repvggb2g4_b32x8.py mmclassification/config/
  5. Train the model (requires ImageNet; see Download && Unzip ImageNet below)
> cd mmclassification

# single-gpu
> python tools/train.py config/repvggb2g4_b32x8.py [optional arguments]

# multi-gpu
> ./tools/dist_train.sh config/repvggb2g4_b32x8.py 8 [optional arguments]

# Optional arguments are:

# --no-validate (not suggested): By default, the codebase performs evaluation every k epochs (default: 1) during training. To disable this behavior, use --no-validate.

# --work-dir ${WORK_DIR}: Override the working directory specified in the config file.

# --resume-from ${CHECKPOINT_FILE}: Resume from a previous checkpoint file.

# Difference between resume-from and load-from: resume-from loads both the model weights and the optimizer state, and the epoch is also inherited from the specified checkpoint. It is usually used for resuming a training process that was interrupted accidentally. load-from only loads the model weights, and the training epoch starts from 0. It is usually used for fine-tuning.
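For reference, the config copied in step 4 follows the standard MMClassification layout. The sketch below is an assumption of what its model section roughly looks like (the real arguments live in repvggb2g4_b32x8.py), followed by a quick way to confirm that the RepVGG backbone from step 3 is registered:

# a minimal sketch, assuming the standard MMClassification 0.x config layout;
# this is NOT a copy of config/repvggb2g4_b32x8.py -- the real arguments are in that file
model = dict(
    type='ImageClassifier',
    backbone=dict(
        type='RepVGG',      # the backbone registered in step 3
        deploy=False,       # multi-branch, training-time structure
        # ... variant-specific arguments (blocks, widths, groups) go here
    ),
    neck=dict(type='GlobalAveragePooling'),
    head=dict(
        type='LinearClsHead',
        num_classes=1000,
        in_channels=2560,   # assumption: the final width of RepVGG-B2g4
        loss=dict(type='CrossEntropyLoss', loss_weight=1.0),
        topk=(1, 5),
    ),
)

# quick registration check, run from the mmclassification root after steps 3 and 4
from mmcv import Config
from mmcls.models import build_classifier

cfg = Config.fromfile('config/repvggb2g4_b32x8.py')
classifier = build_classifier(cfg.model)   # raises an error here if RepVGG was not registered
print(type(classifier.backbone))           # should print the RepVGG class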

Download && Unzip ImageNet

It is recommended to symlink the dataset root to $MMCLASSIFICATION/data. If your folder structure is different, you may need to change the corresponding paths in the config files.

# data/download_imagenet.sh automatically builds the file structure that ImageNet needs for mmcls

> mkdir -p mmclassification/data
> cp RepVGG-openMMLab/data/download_imagenet.sh mmclassification/data
> cd mmclassification/data
> bash download_imagenet.sh
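The exact layout produced by the script is defined in download_imagenet.sh; the default ImageNet configs in mmcls typically read images from data/imagenet/train and data/imagenet/val and annotations from data/imagenet/meta. The check below is only a convenience sketch with assumed paths; verify them against the config file:

# convenience check only -- the paths are assumptions based on the default mmcls
# ImageNet configs, not taken from download_imagenet.sh; run from the mmclassification root
import os

root = 'data/imagenet'
for sub in ('train', 'val', 'meta/train.txt', 'meta/val.txt'):
    path = os.path.join(root, sub)
    print(path, '->', 'found' if os.path.exists(path) else 'missing')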

Pre-trained Models

The pre-trained models are available on Google Drive:

  • RepVGGB2g4.pth

  • RepVGGB3g4.pth

  • RepVGGB3.pth

  • RepVGG-B2g4-deploy.pth

  • RepVGG-B3g4-deploy.pth

  • RepVGG-B3-deploy.pth

Test Model

# in mmclassification

# single-gpu
> python tools/test.py config/repvggb2g4_b32x8.py ${CHECKPOINT_FILE} [--metrics ${METRICS}] [--out ${RESULT_FILE}]


# multi-gpu
> ./tools/dist_test.sh config/repvggb2g4_b32x8.py ${CHECKPOINT_FILE} 8 [--metrics ${METRICS}] [--out ${RESULT_FILE}]

# CHECKPOINT_FILE: checkpoint path

# RESULT_FILE: Filename of the output results. If not specified, the results will not be saved to a file. Supported formats include json, yaml and pickle

# METRICS: Items to be evaluated on the results, such as accuracy, precision, recall, etc.
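If --out was given, the saved file can be loaded back for inspection. A minimal sketch, assuming the output was named result.json (the file name and the structure of its contents depend on how you ran tools/test.py):

# mmcv.load picks the parser from the file extension (json / yaml / pickle)
import mmcv

results = mmcv.load('result.json')   # 'result.json' is just the name used in this example
print(type(results))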

Deploy or Inference

If you deploy the model, or want to test the accuracy of the "deploy" model, you can use it like this (the results are essentially the same; there is no loss of accuracy):

  1. Modify the default configuration file
# in config/repvggb2g4_b32x8.py

model = dict(
    ...
    backbone=dict(
        ...
        deploy=True,
    ),
)
  2. Use the converted parameter file (e.g. RepVGG-B2g4-deploy.pth)
> cd mmclassification/

> python tools/test.py config/repvggb2g4_b32x8.py RepVGG-B2g4-deploy.pth --metrics accuracy
  3. Why is the structure of RepVGG different between training and inference? The answer is here; a short sketch of the idea also follows below.
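In short: during training each RepVGG block has three parallel branches (a 3x3 conv+BN, a 1x1 conv+BN, and an identity BN), and for inference the whole block is folded into a single 3x3 convolution, which is why deploy = True builds a plain single-branch network. The snippet below illustrates only the core step of that conversion, folding a BatchNorm into the preceding convolution; it is a simplified sketch, not the conversion code shipped in this repository (the full conversion also pads the 1x1 kernel to 3x3, expresses the identity branch as a 3x3 kernel, and sums the three branches):

# illustration only: fold a BatchNorm layer into the preceding convolution
import torch

def fuse_conv_bn(conv_weight, bn):
    # BN in eval mode computes y = (x - mean) / std * gamma + beta, so the conv
    # weight is scaled by gamma / std and the remainder becomes a bias term
    std = (bn.running_var + bn.eps).sqrt()
    scale = (bn.weight / std).reshape(-1, 1, 1, 1)
    fused_weight = conv_weight * scale
    fused_bias = bn.bias - bn.running_mean * bn.weight / std
    return fused_weight, fused_bias

conv = torch.nn.Conv2d(64, 64, 3, padding=1, bias=False)
bn = torch.nn.BatchNorm2d(64).eval()

with torch.no_grad():
    # give BN non-trivial statistics so the equivalence check is meaningful
    bn.running_mean.uniform_(-0.1, 0.1)
    bn.running_var.uniform_(0.5, 1.5)
    bn.weight.uniform_(0.5, 1.5)
    bn.bias.uniform_(-0.1, 0.1)

    fused = torch.nn.Conv2d(64, 64, 3, padding=1, bias=True)
    w, b = fuse_conv_bn(conv.weight, bn)
    fused.weight.copy_(w)
    fused.bias.copy_(b)

    x = torch.randn(1, 64, 8, 8)
    print(torch.allclose(fused(x), bn(conv(x)), atol=1e-5))  # True: one conv, same output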

Citation

If you find this project useful in your research, please consider citing:

@misc{mmcv,
    title={{MMCV: OpenMMLab} Computer Vision Foundation},
    author={MMCV Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmcv}},
    year={2018}
}

@misc{2020mmclassification,
    title={OpenMMLab's Image Classification Toolbox and Benchmark},
    author={MMClassification Contributors},
    howpublished = {\url{https://github.com/open-mmlab/mmclassification}},
    year={2020}
}
