SegReg: Segmenting OARs by Registering MR Images and CT Annotations
ISBI 2024
Zeyu Zhang, Xuyin Qi, Bowen Zhang, Biao Wu, Hien Le, Bora Jeong, Zhibin Liao, Yunxiang Liu, Johan Verjans, Minh-Son To, Richard Hartley✉
Organ-at-risk (OAR) segmentation is a critical step in radiotherapy treatment planning, for example for head and neck tumors. Nevertheless, in clinical practice, radiation oncologists predominantly delineate OARs manually on CT scans. This manual process is highly time-consuming and expensive, limiting the number of patients who can receive timely radiotherapy. Additionally, CT offers lower soft-tissue contrast than MRI. Although MRI provides superior soft-tissue visualization, its long acquisition time makes it infeasible for real-time treatment planning. To address these challenges, we propose SegReg, a method that registers MRI to CT using Elastic Symmetric Normalization and performs OAR segmentation on the combined modalities. SegReg outperforms the CT-only baseline by 16.78% in mDSC and 18.77% in mIoU, showing that it effectively combines the geometric accuracy of CT with the superior soft-tissue contrast of MRI, bringing accurate automated OAR segmentation within reach of clinical practice.
(02/10/2024) 🎉 Our paper has been accepted to ISBI 2024!
(02/07/2024) 👉 Please see our latest work, 3D Medical Imaging Segmentation: A Comprehensive Survey, for the latest updates on 3D medical imaging segmentation.
(11/16/2023) 🎉 Our paper has been featured by CVer.
@inproceedings{zhang2024segreg,
title={SegReg: Segmenting OARs by Registering MR Images and CT Annotations},
author={Zhang, Zeyu and Qi, Xuyin and Zhang, Bowen and Wu, Biao and Le, Hien and Jeong, Bora and Liao, Zhibin and Liu, Yunxiang and Verjans, Johan and To, Minh-Son and Hartley, Richard},
booktitle={2024 IEEE International Symposium on Biomedical Imaging (ISBI)},
pages={1--5},
year={2024},
organization={IEEE}
}
Hardware: 2× Intel Xeon Platinum 8360Y (2.40 GHz) CPUs, 8× NVIDIA A100 40 GB GPUs, and 256 GB of RAM
For the Docker container:
docker pull stevezeyuzhang/colab:1.7.1
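To start a container from this image with GPU access and the repository mounted (the mount target /code/SegReg is an assumption consistent with the paths used below):
docker run --gpus all -it -v $(pwd)/SegReg:/code/SegReg stevezeyuzhang/colab:1.7.1 /bin/bash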
For dependencies:
conda create -n segreg
conda activate segreg
conda install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=11.0 -c pytorch
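As an optional sanity check (not part of the original instructions), verify that PyTorch can see the GPUs:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"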
cd SegReg/nnUNet
pip install -e .
export nnUNet_raw_data_base="/code/SegReg/DATASET/nnUNet_raw"
export nnUNet_preprocessed="/code/SegReg/DATASET/nnUNet_preprocessed"
export RESULTS_FOLDER="/code/SegReg/DATASET/nnUNet_trained_models"
source /root/.bashrc
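Note that sourcing /root/.bashrc only restores variables that are actually stored there; to persist the three variables across shells, you could append them first (a suggestion, not from the original setup):
echo 'export nnUNet_raw_data_base="/code/SegReg/DATASET/nnUNet_raw"' >> /root/.bashrc
echo 'export nnUNet_preprocessed="/code/SegReg/DATASET/nnUNet_preprocessed"' >> /root/.bashrc
echo 'export RESULTS_FOLDER="/code/SegReg/DATASET/nnUNet_trained_models"' >> /root/.bashrc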
For the dataset, see the HaN-Seg challenge: https://han-seg2023.grand-challenge.org/
The file directory structure is as follows (a sketch for generating dataset.json follows the tree):
├── SegReg
│ ├── DATASET
│ │ ├── nnUNet_preprocessed
│ │ ├── nnUNet_raw
│ │ │ ├── nnUNet_cropped_data
│ │ │ └── nnUNet_raw_data
│ │ │   ├── Task001_<TASK_NAME>
│ │ │   │ ├── dataset.json
│ │ │   │ ├── imagesTr
│ │ │   │ │ ├── case_01_0000.nii.gz
│ │ │   │ │ ├── case_01_0001.nii.gz
│ │ │   │ │ ├── case_02_0000.nii.gz
│ │ │   │ │ └── case_02_0001.nii.gz
│ │ │   │ ├── imagesTs
│ │ │   │ ├── inferTs
│ │ │   │ ├── labelsTr
│ │ │   │ │ ├── case_01.nii.gz
│ │ │   │ │ └── case_02.nii.gz
│ │ │   │ └── labelsTs
│ │ └── nnUNet_trained_models
│ └── nnUNet
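The dataset.json in the tree above tells nnU-Net about the input channels and labels. A minimal sketch of generating it with nnU-Net v1's own helper; the task name Task001_HaNSeg and the label map are placeholders, and the channel ordering (_0000 = CT, _0001 = registered MR) is our assumption:

from nnunet.dataset_conversion.utils import generate_dataset_json

base = "/code/SegReg/DATASET/nnUNet_raw/nnUNet_raw_data/Task001_HaNSeg"  # hypothetical task name
generate_dataset_json(
    output_file=base + "/dataset.json",
    imagesTr_dir=base + "/imagesTr",
    imagesTs_dir=base + "/imagesTs",
    modalities=("CT", "MR"),  # channel _0000 = CT, channel _0001 = registered MR (assumed ordering)
    labels={0: "background", 1: "OAR_1"},  # placeholder labels; use the real HaN-Seg OAR names
    dataset_name="Task001_HaNSeg",
)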
To register the MR image to the CT for a given case, run:
python register.py <INSTANCE_NUMBER> <TRANSFORMATION>
For the available transformation types, see https://antspy.readthedocs.io/en/latest/registration.html
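register.py is not reproduced here, but a minimal ANTsPy sketch of the underlying step, registering the MR onto the CT with the Elastic Symmetric Normalization (ElasticSyN) transform (file names are illustrative), might look like:

import ants

# Fixed CT and moving MR volumes for one case (illustrative paths)
ct = ants.image_read("case_01_ct.nii.gz")
mr = ants.image_read("case_01_mr.nii.gz")

# Deformable registration; other type_of_transform values are listed in the ANTsPy docs above
reg = ants.registration(fixed=ct, moving=mr, type_of_transform="ElasticSyN")

# The MR resampled into the CT frame becomes the second input channel (_0001)
ants.image_write(reg["warpedmovout"], "case_01_0001.nii.gz")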
For preprocessing:
nnUNet_plan_and_preprocess -t <TASK_ID>
For training:
nnUNet_train 3d_fullres nnUNetTrainerV2 <TASK_ID> <FOLD>
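nnU-Net trains one model per cross-validation fold; to cover all five folds you could run, for example:
for FOLD in 0 1 2 3 4; do nnUNet_train 3d_fullres nnUNetTrainerV2 <TASK_ID> $FOLD; done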
You can train your own model or find our checkpoint here.
For inference:
nnUNet_predict -i /code/SegReg/DATASET/nnUNet_raw/nnUNet_raw_data/Task001_<TASK_NAME>/imagesTs -o /code/SegReg/DATASET/nnUNet_raw/nnUNet_raw_data/Task001_<TASK_NAME>/inferTs -t <TASK_ID> -m 3d_fullres -f <FOLD> -chk model_best
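To compute metrics such as the mDSC and mIoU quoted above, a small evaluation sketch (not the paper's evaluation script; paths and the label index are illustrative):

import numpy as np
import nibabel as nib

def dice_and_iou(pred_path, gt_path, label):
    """Per-class Dice (DSC) and IoU between predicted and reference masks."""
    pred = nib.load(pred_path).get_fdata() == label
    gt = nib.load(gt_path).get_fdata() == label
    inter = np.logical_and(pred, gt).sum()
    dsc = 2 * inter / (pred.sum() + gt.sum() + 1e-8)
    iou = inter / (np.logical_or(pred, gt).sum() + 1e-8)
    return dsc, iou

# Averaging over all labels and cases yields mDSC / mIoU
print(dice_and_iou("inferTs/case_01.nii.gz", "labelsTs/case_01.nii.gz", label=1))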
Thanks to the works our implementation is built upon:
- ANTsPy: Advanced Normalization Tools in Python
- nnU-Net: A Self-Configuring Method for Deep Learning-based Biomedical Image Segmentation
- HaN-Seg: The head and neck organ-at-risk CT and MR segmentation dataset
Also thanks to the works we used in our comparative studies:
- UaNet: Clinically applicable deep learning framework for organs at risk delineation in CT images
- SepNet: Automatic segmentation of organs-at-risk from head-and-neck CT using separable convolutional neural network with hard-region-weighted loss
- MAML: Modality-aware Mutual Learning for Multi-modal Medical Image Segmentation
We would also like to express our sincere gratitude to Dr. Yang Zhao for her genuine support of this work.