If you want to distill models in OpenMMLab repositories, please use MMRazor!
If you are interested in knowledge distillation (KD), you can also contact me on WeChat, and I will invite you to the KD group.
This project is based on mmdetection (v2.9.0); usage is identical to mmdetection, including training, testing, and so on.
- Channel-wise Distillation for Semantic Segmentation
- Improve Object Detection with Feature-based Knowledge Distillation: Towards Accurate and Efficient Detectors
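The channel-wise distillation (CWD) loss from the first paper normalizes each channel's activation map into a spatial probability distribution (softmax over spatial locations with a temperature) and minimizes the KL divergence between the teacher's and student's per-channel distributions. A minimal NumPy sketch for intuition only; it is not this repo's implementation, and the function names and default temperature `tau` are assumptions:

```python
import numpy as np

def channel_softmax(x, tau):
    """Softmax over the spatial dimension of each channel.

    x: array of shape (C, H*W); tau: temperature.
    """
    z = x / tau
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cwd_loss(student, teacher, tau=4.0):
    """Channel-wise distillation loss between two (C, H, W) feature maps.

    Each channel is turned into a spatial distribution, then KL(teacher || student)
    is averaged over channels and scaled by tau^2 (standard KD scaling).
    """
    c = student.shape[0]
    s = channel_softmax(student.reshape(c, -1), tau)
    t = channel_softmax(teacher.reshape(c, -1), tau)
    eps = 1e-8  # avoid log(0)
    kl = (t * (np.log(t + eps) - np.log(s + eps))).sum(axis=1)
    return tau ** 2 * kl.mean()
```

Identical feature maps give a loss of zero, and the loss is non-negative, which is the expected behavior of a KL-based distillation objective.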
- Set up a new conda environment:

  ```shell
  conda create -n distiller python=3.7
  conda activate distiller
  ```
- Install PyTorch following the official instructions (https://pytorch.org).
- Install mmdetection-distiller:

  ```shell
  git clone https://github.com/pppppM/mmdetection-distiller.git
  cd mmdetection-distiller
  pip install -r requirements/build.txt
  pip install -v -e .
  ```
```shell
# single GPU
python tools/train.py configs/distillers/cwd/cwd_retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py

# multi GPU
bash tools/dist_train.sh configs/distillers/cwd/cwd_retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py 8
```
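Distiller configs follow the usual mmdetection config style: a student config is extended with a distiller section naming the teacher config and checkpoint. The sketch below is purely illustrative; the field and type names are assumptions and not necessarily this repo's exact schema:

```python
# Hypothetical distiller config sketch in mmdetection config style.
# All names below (ChannelWiseDistiller, ChannelWiseDivergence, teacher_cfg,
# teacher_ckpt, tau, weight) are illustrative assumptions.
_base_ = ['../../retinanet/retinanet_r50_fpn_2x_coco.py']  # student baseline

distiller = dict(
    type='ChannelWiseDistiller',
    teacher_cfg='configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py',
    teacher_ckpt='path/to/teacher_checkpoint.pth',  # placeholder path
    distill_losses=dict(
        loss_cwd=dict(type='ChannelWiseDivergence', tau=4.0, weight=3.0),
    ),
)
```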
```shell
# single GPU
python tools/test.py configs/distillers/cwd/cwd_retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py $CHECKPOINT --eval bbox

# multi GPU
bash tools/dist_test.sh configs/distillers/cwd/cwd_retina_rx101_64x4d_distill_retina_r50_fpn_2x_coco.py $CHECKPOINT 8 --eval bbox
```
This project is released under the Apache 2.0 license.