ROCardS

Introduction

This repository provides an auto-contouring algorithm [1] that segments the entire heart, its chambers (LA, LV, RA, RV), and its coronary arteries (LAD, LCX, LM, PDA, RCA). The algorithm was trained on 560 thoracic planning computed tomography (CT) scans from Brigham and Women’s Hospital/Dana-Farber Cancer Institute and internally validated on 70 additional cases. External validation was conducted on 283 patients with lung or breast cancer treated at Cedars-Sinai Medical Center (2005-2020).

Please refer to modelLabels.csv for additional details on the algorithm segments.
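The label file can be inspected programmatically. A minimal sketch using the standard library's csv module; the column names (`Label`, `SegmentName`) and the miniature demo file are assumptions for illustration — check the actual modelLabels.csv shipped with the repository:

```python
import csv
from pathlib import Path

def load_segment_labels(csv_path):
    """Read a model label CSV and return one dict per segment row."""
    with open(csv_path, newline="") as f:
        return list(csv.DictReader(f))

# Hypothetical miniature label file for illustration only; the real
# modelLabels.csv ships with the repository and its columns may differ.
demo = Path("modelLabels_demo.csv")
demo.write_text("Label,SegmentName\n1,WholeHeart\n2,LA\n3,LV\n")

for row in load_segment_labels(demo):
    print(row["Label"], row["SegmentName"])
```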

Please refer to the main paper [1] describing this work, available on arXiv: https://arxiv.org/abs/2511.14971.

Usage

Once the required Python version and packages are installed (see the Installation section), you can run the main script as follows:

CLI usage

ROCardS --input <inputFolder/inputFile> --output <outputFolder>

where:

  • <inputFolder/inputFile>: path to a single CT image (.nii.gz/.nrrd) to segment, to a folder containing several .nii.gz/.nrrd CT images, or to one or more DICOM-based images.
  • <outputFolder>: folder where the output AI segmentations will be saved.

Please refer to the Input and Output types section for additional details on input and output data types and format.

Python usage

from ROCardS.pyROCardS import ROCardS
if __name__ == "__main__":
    # Provide input folder/file path and output folder (example placeholder paths)
    input_path = "path/to/ct_image.nii.gz"  # single image, folder of images, or DICOM folder
    output_path = "path/to/output_folder"
    ROCardS(input_path, output_path)
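ROCardS already accepts a folder of images directly; if you instead want one output subfolder per case, a small driver can loop over the files yourself. A sketch under that assumption — `collect_ct_images` and the `cts/`/`out` paths are ours, not part of the package:

```python
from pathlib import Path

SUPPORTED_SUFFIXES = (".nii.gz", ".nrrd")

def collect_ct_images(folder):
    """Return the sorted .nii.gz/.nrrd files found directly inside `folder`."""
    folder = Path(folder)
    return sorted(
        p for p in folder.iterdir()
        if p.is_file() and any(p.name.endswith(s) for s in SUPPORTED_SUFFIXES)
    )

if __name__ == "__main__":
    in_dir = Path("cts")  # hypothetical folder of CT volumes
    if in_dir.is_dir():
        from ROCardS.pyROCardS import ROCardS
        for ct in collect_ct_images(in_dir):
            # One output subfolder per case, named after the image file
            ROCardS(str(ct), str(Path("out") / ct.name.split(".")[0]))
```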

Example of ROCardS AI algorithm results on a public NSCLC-Radiomics CT case (CT-image-only viewer link) [2]:

From left to right, AI results on the CT case mentioned above: a) axial view with heart chambers highlighted, b) 3D view of heart chambers, c) 3D view of heart coronaries.

The AI results shown above can also be found here.


Requirements

  • Python 3.8/3.10
  • tensorflow == 2.10.0
  • numpy
  • rt_utils
  • pydicom
  • pynrrd
  • SimpleITK
  • pyplastimatch
  • plastimatch
  • scipy
  • scikit-image
  • pandas
  • tqdm

Installation

  1. Clone this repository.

  2. Create a virtual environment (e.g., with venv or conda) using Python 3.10 or 3.8 (the tested and supported versions).

  3. Install the dependencies and the main pip package: cd ROCardS && pip install .

Recommended and supported Python versions are 3.8 and 3.10. Please follow the installation instructions to ensure reproducibility of this code. The algorithm was developed on Ubuntu 20.04.6 LTS with Python 3.10.13, using an NVIDIA RTX A6000 GPU (49GB).

To use our tool on DICOM data, we internally rely on plastimatch and pyplastimatch for DICOM CT to .nrrd/.nii.gz conversion. Please follow the instructions here to install plastimatch for your OS.
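Conversion is handled internally, but if you prefer to pre-convert a DICOM series yourself, the plastimatch `convert` command can be driven from Python. A sketch assuming the standard `plastimatch convert --input/--output-img` flags; the `case1/` paths are placeholders:

```python
import shutil
import subprocess

def plastimatch_convert_cmd(dicom_dir, out_image):
    """Build the plastimatch command converting a DICOM CT series
    to a .nrrd or .nii.gz volume (format inferred from the extension)."""
    return [
        "plastimatch", "convert",
        "--input", str(dicom_dir),
        "--output-img", str(out_image),
    ]

if __name__ == "__main__":
    # Only attempt the conversion when plastimatch is actually installed
    if shutil.which("plastimatch"):
        subprocess.run(
            plastimatch_convert_cmd("case1/", "case1_ct.nrrd"), check=True
        )
```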

Windows instructions

#1. Install Python 3.10
#Download the Windows installer for Python 3.10 from Python Releases.
#https://www.python.org/downloads/release/python-3100/
#During installation, check "Add Python to PATH".
#Verify install:
python --version

#2. Create a Virtual Environment
#In Command Prompt or PowerShell:
py -3.10 -m venv ROCardS

#3. Activate newly created environment
ROCardS\Scripts\activate

#4. Install ROCardS package
git clone https://github.com/AIM-Harvard/ROCardS
cd ROCardS
pip install .

Linux instructions

[Linux] Installation with conda and Python 3.10.13

# Conda, Python 3.10.13:
# Create conda env using Python 3.10.13
conda create -n ROCardS python=3.10.13
conda activate ROCardS
# Clone the repo and install it as a pip package
git clone https://github.com/AIM-Harvard/ROCardS
cd ROCardS/
pip install .

[Linux] [Without conda] Using pip only, inside an activated Python environment running Python 3.10.13:

git clone https://github.com/AIM-Harvard/ROCardS
cd ROCardS/
pip install .

[Linux] [Without conda] Using pyenv to create python 3.10.13 environment using CLI:

#1. Install linux packages for pyenv
sudo apt update && sudo apt install -y \
    make build-essential libssl-dev zlib1g-dev \
    libbz2-dev libreadline-dev libsqlite3-dev wget curl llvm \
    libncursesw5-dev xz-utils tk-dev libxml2-dev libxmlsec1-dev \
    libffi-dev liblzma-dev

#2. Install pyenv
curl -fsSL https://pyenv.run | bash
# Restart your shell (or follow the installer's PATH instructions) so pyenv is on PATH
pyenv install 3.10.13

#3. Create a folder for the destination python env ROCardS
mkdir -p ~/Documents/pyEnvs/ROCardS

#4. Create a Python 3.10.13 venv inside our ROCardS environment folder
~/.pyenv/versions/3.10.13/bin/python -m venv ~/Documents/pyEnvs/ROCardS

#5. Activate the new python environment ROCardS
source ~/Documents/pyEnvs/ROCardS/bin/activate

#6. Install pip packages after activating the ROCardS python environment
git clone https://github.com/AIM-Harvard/ROCardS
cd ROCardS/
pip install .

Now that the Python environment is created and all required packages are installed, you can run the main inference script from the CLI:

ROCardS --input <inputFolder/inputFile> --output <outputFolder>

Or from a python script:

from ROCardS.pyROCardS import ROCardS
if __name__ == "__main__":
    # Provide input folder/file path and output folder (example placeholder paths)
    input_path = "path/to/ct_image.nii.gz"  # single image, folder of images, or DICOM folder
    output_path = "path/to/output_folder"
    ROCardS(input_path, output_path)

If you wish to run inference on GPU, you will also need to install CUDA 11.2 and cuDNN 8 (the tested versions).
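Before a long run, you can check whether TensorFlow actually sees a GPU. A minimal sketch (our helper, not part of the package); it returns None when TensorFlow itself is not installed in the current environment:

```python
def gpu_available():
    """Return True if TensorFlow can see at least one GPU, False if it
    cannot, and None if TensorFlow is not installed in this environment."""
    try:
        import tensorflow as tf
    except ImportError:
        return None
    return len(tf.config.list_physical_devices("GPU")) > 0

if __name__ == "__main__":
    print("GPU available:", gpu_available())
```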


Input and Output types

The inference script supports DICOM, .nii.gz and .nrrd file formats, with single-case or multi-case input. Please follow the data structures described below:
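The dispatch between these cases can be pictured with a simplified classifier; this sketch is an illustration of the input layouts below, not the package's actual logic:

```python
from pathlib import Path

def classify_input(path):
    """Classify an input path as a single volume file, a single DICOM
    case folder, or a multi-case folder (simplified illustration only)."""
    path = Path(path)
    if path.is_file():
        return "single_volume"  # expected to be .nii.gz or .nrrd
    entries = list(path.iterdir())
    if any(p.suffix == ".dcm" for p in entries):
        return "single_dicom_case"
    return "multi_case_folder"  # volumes or per-case DICOM subfolders
```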

Multiple cases:

ROCardS --input InputFolder --output OutputFolder
  • DICOM
   InputFolder/ 
       └── case1/ 
           └── dcm_slice1.dcm
           └── ...
           └── dcm_sliceN.dcm
       └── case2/ 
           └── dcm_slice1.dcm
           └── ...
           └── dcm_sliceN.dcm  

   Example output for above inputFolder structure:  
   OutputFolder/
       └── case1/ 
           └── NRRD/
               └── segment1.nrrd
               └── ...
               └── segmentN.nrrd
           └── AI_HEART_RTSTRUCT_<CT_image_serieUID>.dcm
       └── case2/ 
           └── NRRD/
               └── segment1.nrrd
               └── ...
               └── segmentN.nrrd
           └── AI_HEART_RTSTRUCT_<CT_image_serieUID>.dcm
  • NifTi (nii.gz)
   InputFolder/ 
       └── case1_ct.nii.gz
       └── case2_ct.nii.gz  

   Example output for above inputFolder structure:  
   OutputFolder/
       └── case1_ct/ 
           └── segment1.nii.gz
           └── ...
           └── segmentN.nii.gz
       └── case2_ct/ 
           └── segment1.nii.gz
           └── ...
           └── segmentN.nii.gz
  • NRRD (nrrd)
   InputFolder/ 
       └── case1_ct.nrrd
       └── case2_ct.nrrd  

   Example output for above inputFolder structure:  
   OutputFolder/
       └── case1_ct/ 
           └── segment1.nrrd
           └── ...
           └── segmentN.nrrd
       └── case2_ct/ 
           └── segment1.nrrd
           └── ...
           └── segmentN.nrrd

Single case:

  • DICOM
    ROCardS --input inputDcmFolder --output outputFolder/
    
    After the script finishes, outputFolder will look like this:
    OutputFolder/
        └── NRRD/
            └── segment1.nrrd
            └── ...
            └── segmentN.nrrd
        └── AI_HEART_RTSTRUCT_<CT_image_serieUID>.dcm
    
  • NifTi (nii.gz)
    ROCardS --input image_ct.nii.gz --output outputFolder/
    
    After the script finishes, outputFolder will look like this:
    OutputFolder/
        └── NRRD/
            └── segment1.nii.gz
            └── ...
            └── segmentN.nii.gz
    
  • NRRD (nrrd)
    ROCardS --input image_ct.nrrd --output outputFolder/
    
    After the script finishes, outputFolder will look like this:
    OutputFolder/
        └── NRRD/
            └── segment1.nrrd
            └── ...
            └── segmentN.nrrd
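Given the layouts shown above, the per-case segment files can be enumerated with a small helper; a sketch (our code, not part of the package), assuming one subfolder per case or output subfolder:

```python
from pathlib import Path

def list_segments(output_folder):
    """Map each subfolder of the output to its sorted segment files,
    assuming the output layouts shown above."""
    segments = {}
    for case_dir in sorted(Path(output_folder).iterdir()):
        if case_dir.is_dir():
            segments[case_dir.name] = sorted(
                p.name for p in case_dir.rglob("*")
                if p.is_file()
                and (p.name.endswith(".nrrd") or p.name.endswith(".nii.gz"))
            )
    return segments
```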
    

License

License: CC BY-NC 4.0

This work, including our developed auto-contouring algorithm weights, is licensed under CC BY-NC 4.0: it is intended for research purposes only and may not be used commercially. To view a copy of this license, please visit https://creativecommons.org/licenses/by-nc/4.0/, or see the LICENSE file for the code and the LICENSE_WEIGHTS file for the algorithm weights.

If you use this work in your research, please cite the following paper: PLACEHOLDER PAPER


Contact

If you have any questions regarding this work, or suggestions for improvement, please open an issue on GitHub or contact us at cciausu@bwh.harvard.edu

References

[1] Guthier C. et al. Clinical Validation and Prospective Deployment of an Automated Deep Learning-Based Coronary Segmentation and Cardiac Toxicity Risk Prediction System. https://arxiv.org/abs/2511.14971

[2] Fedorov A. et al. NCI Imaging Data Commons. Cancer Res. 81, 4188–4193 (2021).
