📄 This is the official implementation of the paper:
ContourFormer: Real-Time Contour-Based End-to-End Instance Segmentation Transformer
Download the pretrained model from Google Drive.
Create the conda environment and install the dependencies:

```shell
conda create -n contourformer python=3.11.9
conda activate contourformer
pip install -r requirements.txt
```
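After installation, you can quickly confirm that PyTorch sees your GPUs. This is only a minimal sketch; it assumes PyTorch is pulled in via requirements.txt:

```python
# Minimal environment check: print the installed PyTorch version and CUDA availability.
import torch

print("torch", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```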
## COCO2017 Dataset

- Download COCO2017 from OpenDataLab or COCO.
- Modify paths in coco_poly_detection.yml:

```yml
train_dataloader:
  img_folder: /data/COCO2017/train2017/
  ann_file: /data/COCO2017/annotations/instances_train2017.json
val_dataloader:
  img_folder: /data/COCO2017/val2017/
  ann_file: /data/COCO2017/annotations/instances_val2017.json
```
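As a quick sanity check that the annotation path resolves, you can load it with pycocotools. This is a sketch only; it assumes pycocotools is installed, which COCO-format training normally requires, and uses the example path from the config above:

```python
# Load the COCO2017 training annotations and report how many images are indexed.
from pycocotools.coco import COCO

coco = COCO("/data/COCO2017/annotations/instances_train2017.json")
print(len(coco.getImgIds()), "training images indexed")
```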
## SBD Dataset
- Download the COCO-format SBD dataset from here.
- Modify paths in sbd_poly_detection.yml:

```yml
train_dataloader:
  img_folder: /data/sbd/img/
  ann_file: /data/sbd/annotations/sbd_train_instance.json
val_dataloader:
  img_folder: /data/sbd/img/
  ann_file: /data/sbd/annotations/sbd_trainval_instance.json
```
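If you want to double-check that the downloaded SBD annotations follow the COCO instance format, a small inspection script like the one below is enough. This is only a sketch; the path is the example path from the config above:

```python
# Peek at the SBD training annotations and confirm the COCO-style top-level keys.
import json

with open("/data/sbd/annotations/sbd_train_instance.json") as f:
    ann = json.load(f)

print(sorted(ann.keys()))  # expect: annotations, categories, images
print(len(ann["images"]), "images,", len(ann["annotations"]), "instances")
```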
## KINS Dataset
- Download the KITTI dataset from the official website.
- Download the annotation files instances_train.json and instances_val.json from KINS.
- Organize the dataset into the following structure:

```
├── /path/to/kitti
│   ├── testing
│   │   ├── image_2
│   │   ├── instances_val.json
│   ├── training
│   │   ├── image_2
│   │   ├── instances_train.json
```
- Modify paths in kins_poly_detection.yml:

```yml
train_dataloader:
  img_folder: /data/kins_dataset/training/image_2/
  ann_file: /data/kins_dataset/training/instances_train.json
val_dataloader:
  img_folder: /data/kins_dataset/testing/image_2/
  ann_file: /data/kins_dataset/testing/instances_val.json
```
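Before launching training, you can verify that the layout above is in place. A minimal sketch; /data/kins_dataset is just the example root used in the config and should be adjusted to your own path:

```python
# Check that the expected KINS/KITTI folders and annotation files exist.
import os

root = "/data/kins_dataset"  # example root from kins_poly_detection.yml
expected = [
    "training/image_2",
    "training/instances_train.json",
    "testing/image_2",
    "testing/instances_val.json",
]
for rel in expected:
    path = os.path.join(root, rel)
    print("OK     " if os.path.exists(path) else "MISSING", path)
```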
Run inference and visualization on a single image with draw.py:

```shell
python draw.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml -r weight/contourformer_hgnetv2_b3_sbd.pth -i your_image.jpg
```
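To visualize a whole folder of images, a small wrapper that repeatedly shells out to draw.py works. This is a hypothetical convenience script: the demo_images/ folder name is an assumption, and the flags are taken verbatim from the command above:

```python
# Run draw.py on every .jpg in a folder by invoking the command above once per image.
import glob
import subprocess

for img in sorted(glob.glob("demo_images/*.jpg")):  # assumed input folder
    subprocess.run(
        [
            "python", "draw.py",
            "-c", "configs/contourformer/contourformer_hgnetv2_b3_sbd.yml",
            "-r", "weight/contourformer_hgnetv2_b3_sbd.pth",
            "-i", img,
        ],
        check=True,
    )
```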
## COCO2017
- Training
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_coco.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_coco.yml --seed=0
```

- Testing
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_coco.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_coco.yml --test-only -r model.pth
```

## SBD
- Training
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_sbd.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml --seed=0
```

- Testing
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b2_sbd.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3 torchrun --master_port=7777 --nproc_per_node=4 train.py -c configs/contourformer/contourformer_hgnetv2_b3_sbd.yml --test-only -r model.pth
```

## KINS
- Training
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b2_kins.yml --seed=0
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b3_kins.yml --seed=0
```

- Testing
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b2_kins.yml --test-only -r model.pth
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 torchrun --master_port=7777 --nproc_per_node=8 train.py -c configs/contourformer/contourformer_hgnetv2_b3_kins.yml --test-only -r model.pth
```

If you use ContourFormer or its methods in your work, please cite the following BibTeX entry:
```bibtex
@misc{yao2025contourformerrealtimecontourbasedendtoendinstance,
title={ContourFormer:Real-Time Contour-Based End-to-End Instance Segmentation Transformer},
author={Weiwei Yao and Chen Li and Minjun Xiong and Wenbo Dong and Hao Chen and Xiong Xiao},
year={2025},
eprint={2501.17688},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2501.17688},
}
```

Our work is built upon D-FINE. Thanks to D-FINE and PolySnake for the inspiration.
✨ Feel free to contribute and reach out if you have any questions! ✨