This notebook provides a minimal working example of TotalSegmentator, a tool for the segmentation of 104 anatomical structures from CT images. The model was trained on a wide range of CT imaging data covering different pathologies, scanners, protocols, and institutions.
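For orientation, below is a minimal sketch of how TotalSegmentator is typically invoked on a single CT volume through its Python API; the file names are placeholders rather than the notebook's actual inputs.

```python
# Minimal TotalSegmentator invocation via the Python API; the CLI equivalent is
# `TotalSegmentator -i ct.nii.gz -o segmentations/`. Paths are placeholders.
from totalsegmentator.python_api import totalsegmentator

# Writes one NIfTI mask per anatomical structure into the output directory.
totalsegmentator("ct.nii.gz", "segmentations/")
```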
We test TotalSegmentator by implementing an end-to-end (cloud-based) pipeline on publicly available whole-body CT scans hosted on the Imaging Data Commons (IDC), starting from raw DICOM CT data and ending with a DICOM SEG object that stores the segmentation masks generated by the AI model. The testing dataset is external and independent of the data used in the development phase of the model (training and validation), and it covers a wide variety of image types, from the area covered by the scan to the presence of contrast agents and various types of artefacts.
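The sketch below outlines one way such a pipeline can be wired together, using the TotalSegmentator CLI alongside common open-source DICOM tooling (s5cmd, dcm2niix, dcmqi). All paths, the s5cmd manifest, and the metadata JSON are illustrative assumptions, not the notebook's exact cells.

```python
# Illustrative end-to-end sketch: pull a DICOM CT series from IDC, convert it
# to NIfTI, segment it with TotalSegmentator, and encode the result as DICOM SEG.
# File and manifest names are hypothetical placeholders.
import glob
import subprocess
from pathlib import Path

dicom_dir = "idc_series"     # destination assumed by the (hypothetical) s5cmd manifest
nifti_dir = "nifti"          # converted CT volume
seg_dir = "segmentations"    # per-structure masks produced by TotalSegmentator
Path(nifti_dir).mkdir(exist_ok=True)

# 1. Pull the CT series from the public IDC buckets, e.g. by running an
#    s5cmd manifest exported from the IDC portal.
subprocess.run(["s5cmd", "--no-sign-request", "run", "idc_manifest.s5cmd"], check=True)

# 2. Convert the DICOM series to a compressed NIfTI volume with dcm2niix.
subprocess.run(["dcm2niix", "-z", "y", "-f", "ct", "-o", nifti_dir, dicom_dir], check=True)

# 3. Run TotalSegmentator on the CT volume (one NIfTI mask per structure).
subprocess.run(["TotalSegmentator", "-i", f"{nifti_dir}/ct.nii.gz", "-o", seg_dir], check=True)

# 4. Encode the masks as a DICOM SEG object that references the original series,
#    e.g. with dcmqi's itkimage2segimage; the metadata JSON (assumed to exist here)
#    maps each mask file to a coded anatomical term.
mask_files = ",".join(sorted(glob.glob(f"{seg_dir}/*.nii.gz")))
subprocess.run([
    "itkimage2segimage",
    "--inputImageList", mask_files,
    "--inputDICOMDirectory", dicom_dir,
    "--inputMetadata", "totalsegmentator_meta.json",
    "--outputDICOM", "totalsegmentator_seg.dcm",
], check=True)
```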
The way all the operations are executed - from pulling the data, to postprocessing, to standardising the results - is designed to promote transparency and reproducibility.
You can find the notebook here.
Please cite the following article if you use this code or pre-trained models:
Wasserthal, J., Meyer, M., Breit, H.C., Cyriac, J., Yang, S. and Segeroth, M., 2022. TotalSegmentator: robust segmentation of 104 anatomical structures in CT images. arXiv preprint arXiv:2208.05868, https://doi.org/10.48550/arXiv.2208.05868.
The original code is published on GitHub under the Apache-2.0 license.