This repository provides training and inference codes for Deep Multi-Magnification Network published here. Deep Multi-Magnification Network automatically segments multiple tissue subtypes using a set of patches from multiple magnifications in histopathology whole slide images.
- Python 3.6.7
- Pytorch 1.3.1
- OpenSlide 1.1.1
- Albumentations
The main training code is `training.py`. The trained segmentation model will be saved under `runs/` by default.

In addition to `config`, you may need to update the following variables before running `training.py`:
- `n_classes`: the number of tissue subtype classes + 1
- `train_file` and `val_file`: the lists of training and validation patches
  - Slide patches must be stored as `/path/slide_tiles/patch_1.jpg`, `/path/slide_tiles/patch_2.jpg`, ... `/path/slide_tiles/patch_N.jpg`
  - The corresponding label patches must be stored as `/path/label_tiles/patch_1.png`, `/path/label_tiles/patch_2.png`, ... `/path/label_tiles/patch_N.png`
  - `train_file` and `val_file` must be formatted as `/path/,patch_1`, `/path/,patch_2`, ... `/path/,patch_N`
- `d`: the number of pixels of each class in the training set, used by the weighted cross-entropy loss function
Note that pixels labeled as class 0 are unannotated and will not contribute to the training.
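The per-class pixel counts in `d` can be tallied directly from the label patches. The sketch below, with hypothetical helper names, also derives inverse-frequency class weights and gives class 0 a weight of zero so unannotated pixels do not contribute; the exact weighting scheme used by `training.py` may differ:

```python
import numpy as np
from PIL import Image

def count_class_pixels(label_paths, n_classes):
    """Count pixels per class over all label patches.

    Index 0 is the unannotated class."""
    d = np.zeros(n_classes, dtype=np.int64)
    for path in label_paths:
        labels = np.array(Image.open(path))
        d += np.bincount(labels.ravel(), minlength=n_classes)[:n_classes]
    return d

def class_weights(d):
    """Inverse-frequency weights from pixel counts d (an assumed scheme).

    Class 0 (unannotated) is given weight 0 so it does not contribute."""
    w = np.zeros(len(d), dtype=np.float64)
    annotated = d[1:].sum()
    w[1:] = np.where(d[1:] > 0, annotated / np.maximum(d[1:], 1), 0.0)
    return w
```

The resulting weight vector can then be passed to a weighted cross-entropy loss (e.g. `torch.nn.CrossEntropyLoss(weight=...)`).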
The main inference codes are `slidereader_coords.py` and `inference.py`. You first need to run `slidereader_coords.py` to generate patch coordinates to be segmented in input whole slide images. After generating patch coordinates, you may run `inference.py` to generate segmentation predictions of input whole slide images. The segmentation predictions will be saved under `imgs/` by default.

You may need to update the following variables before running `slidereader_coords.py`:
- `slides_to_read`: the list of whole slide images
- `coord_file`: an output file listing all patch coordinates
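One simple way to enumerate patch coordinates is a non-overlapping grid over the slide dimensions. This is a sketch only, with a hypothetical function name; `slidereader_coords.py` may use a different tiling scheme (overlap, tissue filtering, etc.):

```python
def tile_coordinates(width, height, tile_size=256):
    """Top-left (x, y) coordinates of non-overlapping tiles covering a
    width x height whole slide image at level 0 (a sketch, not the
    repository's actual tiling logic)."""
    return [(x, y)
            for y in range(0, height - tile_size + 1, tile_size)
            for x in range(0, width - tile_size + 1, tile_size)]

# Hypothetical usage with OpenSlide, iterating over slides_to_read:
#   import openslide
#   slide = openslide.OpenSlide(slide_path)
#   coords = tile_coordinates(*slide.dimensions)
# and then writing one line per coordinate to coord_file.
```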
In addition to `model_path` and `out_path`, you may need to update the following variables before running `inference.py`:
- `n_classes`: the number of tissue subtype classes + 1
- `test_file`: the list of patch coordinates generated by `slidereader_coords.py`
- `data_path`: the path where whole slide images are located
Please download the pretrained breast model here.
Note that segmentation predictions will be generated in 4-bit BMP format. The size limit for 4-bit BMP files is 2^32 pixels.
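The BMP predictions can be inspected with Pillow, which reads 4-bit BMPs into 8-bit palette (`P`) mode, so the pixel values are the class indices directly. A minimal sketch with an illustrative function name:

```python
import numpy as np
from PIL import Image

def prediction_class_counts(bmp_path):
    """Load a segmentation prediction saved as a palette BMP and count
    pixels per predicted class index."""
    labels = np.array(Image.open(bmp_path))  # palette indices = class IDs
    classes, counts = np.unique(labels, return_counts=True)
    return dict(zip(classes.tolist(), counts.tolist()))
```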
Please find other pretrained segmentation models using Deep Multi-Magnification Network:
This project is under the CC-BY-NC 4.0 license. See LICENSE for details. (c) MSK
- This code is inspired by pytorch-semseg and MICCAI 2017 Robotic Instrument Segmentation.
If you find our work useful, please cite our paper:
@article{ho2021,
title={Deep Multi-Magnification Networks for multi-class breast cancer image segmentation},
author={Ho, David Joon and Yarlagadda, Dig V.K. and D'Alfonso, Timothy M. and Hanna, Matthew G. and Grabenstetter, Anne and Ntiamoah, Peter and Brogi, Edi and Tan, Lee K. and Fuchs, Thomas J.},
journal={Computerized Medical Imaging and Graphics},
year={2021},
volume={88},
pages={101866}
}