RCNN on COCO:

Backbone | Model | Images/GPU | Lr schd | FPS | Box AP | Mask AP | Download | Config |
---|---|---|---|---|---|---|---|---|
ResNet50-vd-SSLDv2-FPN | Faster | 1 | 1x | - | 41.4 | - | model | config |
ResNet50-vd-SSLDv2-FPN | Faster | 1 | 2x | - | 42.3 | - | model | config |
ResNet50-vd-SSLDv2-FPN | Mask | 1 | 1x | - | 42.0 | 38.2 | model | config |
ResNet50-vd-SSLDv2-FPN | Mask | 1 | 2x | - | 42.7 | 38.9 | model | config |
ResNet50-vd-SSLDv2-FPN | Cascade Faster | 1 | 1x | - | 44.4 | - | model | config |
ResNet50-vd-SSLDv2-FPN | Cascade Faster | 1 | 2x | - | 45.0 | - | model | config |
ResNet50-vd-SSLDv2-FPN | Cascade Mask | 1 | 1x | - | 44.9 | 39.1 | model | config |
ResNet50-vd-SSLDv2-FPN | Cascade Mask | 1 | 2x | - | 45.7 | 39.7 | model | config |

YOLOv3 on COCO:

Backbone | Input shape | Images/GPU | Lr schd | FPS | Box AP | Download | Config |
---|---|---|---|---|---|---|---|
MobileNet-V1-SSLD | 608 | 8 | 270e | - | 31.0 | model | config |
MobileNet-V1-SSLD | 416 | 8 | 270e | - | 30.6 | model | config |
MobileNet-V1-SSLD | 320 | 8 | 270e | - | 28.4 | model | config |

YOLOv3 on Pascal VOC:

Backbone | Input shape | Images/GPU | Lr schd | FPS | Box AP | Download | Config |
---|---|---|---|---|---|---|---|
MobileNet-V1-SSLD | 608 | 8 | 270e | - | 78.3 | model | config |
MobileNet-V1-SSLD | 416 | 8 | 270e | - | 79.6 | model | config |
MobileNet-V1-SSLD | 320 | 8 | 270e | - | 77.3 | model | config |
MobileNet-V3-SSLD | 608 | 8 | 270e | - | 80.4 | model | config |
MobileNet-V3-SSLD | 416 | 8 | 270e | - | 79.2 | model | config |
MobileNet-V3-SSLD | 320 | 8 | 270e | - | 77.3 | model | config |
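
With a local PaddleDetection checkout, the linked configs work with the repo's standard tools: training is typically launched as `python tools/train.py -c <config>.yml` and evaluation as `python tools/eval.py -c <config>.yml -o weights=<weights>.pdparams`, where `<config>` and `<weights>` stand for the files in the Config and Download columns above.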

Notes:

- SSLD is a knowledge distillation method. We initialize the detectors with the stronger pretrained backbone weights obtained after distillation, which further improves detection accuracy. Please refer to the knowledge distillation tutorial; a brief sketch of the idea is shown below.
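
For intuition, here is a minimal, generic sketch of soft-label knowledge distillation written with PaddlePaddle. It is illustrative only: the `distillation_loss` helper and the temperature value are assumptions made for this example, and the actual SSLD recipe (semi-supervised data selection and soft-label training, per the paper cited below) is more involved than this plain KL-divergence formulation.

```python
import paddle
import paddle.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-label distillation (illustrative, not the exact SSLD loss):
    pull the student's predictions toward the teacher's softened distribution."""
    t = temperature
    # kl_div expects log-probabilities as `input` and probabilities as `label`
    student_log_probs = F.log_softmax(student_logits / t, axis=-1)
    teacher_probs = F.softmax(teacher_logits / t, axis=-1)
    # scale by t*t so gradients stay comparable to a hard-label loss
    return F.kl_div(student_log_probs, teacher_probs, reduction="mean") * (t * t)

# Toy usage: a batch of 4 samples over 1000 classes
student_logits = paddle.randn([4, 1000])
teacher_logits = paddle.randn([4, 1000])
print(distillation_loss(student_logits, teacher_logits))
# In training one would typically add the supervised term, e.g.:
# total_loss = F.cross_entropy(student_logits, labels)
#     + distillation_loss(student_logits, teacher_logits)
```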

Citations:

```
@misc{cui2021selfsupervision,
    title={Beyond Self-Supervision: A Simple Yet Effective Network Distillation Alternative to Improve Backbones},
    author={Cheng Cui and Ruoyu Guo and Yuning Du and Dongliang He and Fu Li and Zewu Wu and Qiwen Liu and Shilei Wen and Jizhou Huang and Xiaoguang Hu and Dianhai Yu and Errui Ding and Yanjun Ma},
    year={2021},
    eprint={2103.05959},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```