This repository is an official implementation of the ACM Multimedia 2021 paper *Implicit Feature Refinement for Instance Segmentation*.
TL;DR: Implicit feature refinement (IFR) enjoys several advantages: 1) it simulates an infinite-depth refinement network while requiring only the parameters of a single residual block; 2) it produces high-level equilibrium instance features with a global receptive field; 3) it serves as a general plug-and-play module that is easily extended to most object recognition frameworks.
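The following is a minimal PyTorch sketch of the implicit refinement idea, not the code in this repository: a single weight-shared residual block `f` is iterated until the instance features reach an (approximate) equilibrium `z* = f(z*, x)`, which is what an infinitely deep stack of such blocks would compute. All names (`ResidualBlock`, `fixed_point_refine`), shapes, and convergence settings below are illustrative.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One refinement block; its weights are reused at every step."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, z, x):
        # Residual update of the refined features z, conditioned on
        # the input instance features x.
        return self.relu(z + self.conv2(self.relu(self.conv1(z + x))))

def fixed_point_refine(f, x, max_iter=30, tol=1e-3):
    """Iterate z <- f(z, x) until the relative residual is small:
    an 'infinite-depth' refinement that only ever uses one block's
    worth of parameters."""
    z = torch.zeros_like(x)
    for _ in range(max_iter):
        z_next = f(z, x)
        if (z_next - z).norm() / (z.norm() + 1e-8) < tol:
            return z_next
        z = z_next
    return z

# Toy usage: refine RoI mask features of shape (N, C, 14, 14).
f = ResidualBlock(256)
x = torch.randn(8, 256, 14, 14)
z_star = fixed_point_refine(f, x)  # approximate equilibrium features
```

Note that in the paper's implicit form, gradients at the equilibrium are obtained by implicit differentiation as in DEQ, rather than by backpropagating through the loop; the sketch above only illustrates the forward fixed-point view.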
- Install cvpods following the instructions:

```shell
# Install cvpods
git clone https://github.com/Megvii-BaseDetection/cvpods.git
cd cvpods
## build cvpods (requires GPU)
python3 setup.py build develop
## prepare data path
mkdir datasets
ln -s /path/to/your/coco/dataset datasets/coco
```
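Optionally, a quick sanity check can confirm that the build succeeded and that the symlinked dataset matches the usual COCO layout. The snippet below is an illustrative helper, not part of cvpods, and the `annotations`/`train2017`/`val2017` names assume the standard COCO release:

```python
# check_setup.py - quick sanity check (illustrative; not part of cvpods)
from pathlib import Path

import cvpods  # fails here if the build did not succeed
print("cvpods imported from", cvpods.__file__)

# The dataset symlink is assumed to follow the standard COCO layout.
root = Path("datasets/coco")
for sub in ("annotations", "train2017", "val2017"):
    status = "ok" if (root / sub).exists() else "MISSING"
    print(f"{root / sub}: {status}")
```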
- To save training and testing time, an explicit form of our IFR, annotated with "weight_sharing", is provided for mask_rcnn and achieves competitive performance (a minimal sketch of this variant follows the run commands below).
- For fast evaluation, please download the trained models from here (see also the model zoo table below).
- Run the project:

```shell
git clone https://github.com/lufanma/IFR.git
# enter the config directory for a model, e.g. mask_rcnn.ifr
cd IFR/mask_rcnn.ifr.res50.fpn.coco.multiscale.1x/
# train
sh pods_train.sh
# test
sh pods_test.sh
# test with provided weights (MODEL.WEIGHTS and OUTPUT_DIR are optional)
sh pods_test.sh \
    MODEL.WEIGHTS /path/to/your/save_dir/ckpt.pth \
    OUTPUT_DIR /path/to/your/save_dir
```
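As referenced in the "weight_sharing" note above, the explicit variant simply unrolls a small, fixed number of steps of the same shared block instead of solving for the equilibrium, which is cheaper to train and test. A minimal sketch, assuming the `ResidualBlock` `f` and features `x` from the earlier snippet (the step count is illustrative):

```python
import torch

def unrolled_refine(f, x, num_steps: int = 3):
    """Explicit 'weight_sharing' variant: apply the same block a fixed
    number of times instead of solving for the equilibrium exactly.
    Gradients flow through the unrolled steps by ordinary backprop."""
    z = torch.zeros_like(x)
    for _ in range(num_steps):
        z = f(z, x)  # one block, reused: weights shared across steps
    return z

# With f (ResidualBlock) and x from the earlier sketch:
# z = unrolled_refine(f, x, num_steps=3)
```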
| Model | AP | AP50 | AP75 | APs | APm | APl | Link |
|---|---|---|---|---|---|---|---|
| mask_rcnn.ifr.res50.fpn.coco.multiscale.1x | 36.3 | 56.8 | 39.2 | 17.3 | 39.0 | 52.2 | download |
| mask_rcnn.res50.fpn.coco.multiscale.weight_sharing.1x | 35.9 | 56.7 | 38.5 | 17.1 | 38.5 | 51.8 | download |
| cascade_rcnn.ifr.res50.fpn.coco.800size.1x | 36.9 | 57.1 | 39.8 | 17.4 | 39.3 | 54.6 | download |
If you find IFR useful for your research, please consider citing:
```
@inproceedings{ma2021implicit,
  title={Implicit Feature Refinement for Instance Segmentation},
  author={Ma, Lufan and Wang, Tiancai and Dong, Bin and Yan, Jiangpeng and Li, Xiu and Zhang, Xiangyu},
  booktitle={Proceedings of the 29th ACM International Conference on Multimedia},
  pages={3088--3096},
  year={2021}
}
```
Thanks to the open-source code of DEQ and MDEQ; our IFR is developed based on them.