Semi-supervised Adaptive Distillation is a model compression method for object detection. Please refer to our paper for more details. The code is implemented on top of the official Detectron and Caffe2.
Student model | Baseline mAP | Teacher model | Baseline mAP | Student mAP after distillation
---|---|---|---|---
ResNet-50 | 34.3 | ResNet-101 | 36.0 | 36.5
ResNet-101 | 34.4 | ResNeXt-101 | 36.6 | 36.8
We use an input scale of 600 for the ResNet-50 student and 500 for the ResNet-101 student. All results are reported on COCO minival.
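For reference, the input scale is controlled by the standard Detectron config keys; a minimal fragment for the ResNet-50 setting might look like the following (the actual files under configs/focal_distillation/ are authoritative):

```
TRAIN:
  SCALES: (600,)  # short-side training scale; (500,) for the ResNet-101 student
TEST:
  SCALE: 600      # short-side test scale
```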
We include the customized Caffe2 in our code, so the requirements are the same as for the official Detectron and Caffe2. To run our code with the official Caffe2 instead, please add two operators: one is located at `caffe2/modules/detectron/pow_sum_op.h` and the other at `caffe2/modules/detectron/sigmoid_focal_distillation_loss_op.h`.
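The exact distillation loss is defined in the paper and in the operator source; purely as a hypothetical sketch, a focal-weighted soft-target loss of this flavor could be written in NumPy as follows (the function name, `gamma`, and the modulating term are our illustrative choices, not the operator's actual definition):

```
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def focal_distillation_loss(student_logits, teacher_logits, gamma=2.0, eps=1e-6):
    # Teacher sigmoid outputs act as soft targets for the student.
    p = sigmoid(student_logits)
    q = sigmoid(teacher_logits)
    # Soft-target binary cross-entropy, per anchor and class.
    ce = -(q * np.log(p + eps) + (1.0 - q) * np.log(1.0 - p + eps))
    # Focal-style modulation: anchors where student and teacher already
    # agree are down-weighted, concentrating gradient on hard examples.
    modulator = np.abs(q - p) ** gamma
    return np.sum(modulator * ce)

if __name__ == "__main__":
    s = np.random.randn(4, 80)  # student logits: (anchors, classes)
    t = np.random.randn(4, 80)  # teacher logits
    print(focal_distillation_loss(s, t))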
Please follow the official installation steps of Detectron.
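If you go the official-Caffe2 route, one plausible build flow is to copy the operator sources into your Caffe2 tree and rebuild; `$CAFFE2_ROOT` and the `.cc`/`.cu` companions of the two headers above are assumptions on our part:

```
# Illustrative only: $CAFFE2_ROOT is your official Caffe2 checkout, and the
# .cc/.cu companions of the two headers are assumed to sit alongside them.
cp caffe2/modules/detectron/pow_sum_op.* \
   caffe2/modules/detectron/sigmoid_focal_distillation_loss_op.* \
   "$CAFFE2_ROOT/modules/detectron/"
cd "$CAFFE2_ROOT/build" && cmake .. && make -j"$(nproc)" install
```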
- Teacher model: ResNet-101. BaiduYun, Google Drive
- Teacher model: ResNeXt-101. BaiduYun, Google Drive
- The annotation file for COCO 2017 unlabeled data, produced by the ResNet-101 teacher model above. BaiduYun, Google Drive
- The annotation file for COCO 2017 unlabeled data, produced by the ResNeXt-101 teacher model above. BaiduYun, Google Drive
- Student model after distillation: ResNet-50. BaiduYun, Google Drive
- Student model after distillation: ResNet-101. BaiduYun, Google Drive
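After downloading, a quick sanity check of the files can be done in Python (under Python 2, matching the training command below). The paths match the defaults used in the training section, the `blobs` key reflects Detectron's standard checkpoint layout, and the `annotations` field is assumed to be present in the teacher-generated files:

```
import json
import pickle

# Paths match the defaults used in the training section below.
with open('weights/R101_600/model_final.pkl', 'rb') as f:
    ckpt = pickle.load(f)  # Detectron checkpoints are pickled dicts
print(len(ckpt['blobs']), 'parameter blobs in the teacher checkpoint')

ann_path = 'lib/datasets/data/annotations/image_info_unlabeled2017_r101_600.json'
with open(ann_path) as f:
    anns = json.load(f)  # COCO-style JSON
print(len(anns.get('images', [])), 'images,',
      len(anns.get('annotations', [])), 'teacher-generated annotations')
```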
To train the ResNet-50 student against the ResNet-101 teacher:

```
python2 tools/train_net.py \
    --multi-gpu-testing \
    --cfg configs/focal_distillation/retinanet_R-50-FPN_distillation.yaml \
    --teacher_cfg configs/focal_distillation/retinanet_R-101-FPN_1x_teacher.yaml
```
We assume the weight file for the teacher model is located at `weights/R101_600/model_final.pkl` and the annotation file at `lib/datasets/data/annotations/image_info_unlabeled2017_r101_600.json`.
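To evaluate a distilled student, Detectron's standard test script should work; this assumes the repo keeps the upstream tools/test_net.py, and the checkpoint path is a placeholder to adjust:

```
python2 tools/test_net.py \
    --cfg configs/focal_distillation/retinanet_R-50-FPN_distillation.yaml \
    TEST.WEIGHTS /path/to/distilled/model_final.pkl \
    NUM_GPUS 1
```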