Official repository for the MICCAI 2022 SASHIMI Workshop paper: Subject-Specific Lesion Generation and Pseudo-Healthy Synthesis for Multiple Sclerosis Brain Images
We propose an attention-based generative adversarial network (GAN) for simultaneous multiple sclerosis (MS) lesion generation and pseudo-healthy synthesis. Our 2D framework employs two generators: one to generate pathological images from healthy images, and a second to synthesize healthy images from pathological ones. We utilise three discriminators: one for healthy images, one for pathological images, and a third for the foreground output of the pathological generator, which encourages the generator to focus on lesion-related regions.
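The cycle structure described above can be sketched in plain NumPy for intuition. This is illustrative only: the actual generators are PyTorch CNNs defined in models/networks.py, and G_hp / G_ph below are hypothetical stand-ins for the healthy-to-pathological and pathological-to-healthy generators.

```python
import numpy as np

def l1(a, b):
    # Mean absolute error, the usual choice for cycle-consistency.
    return float(np.mean(np.abs(a - b)))

# Hypothetical stand-ins for the two generators:
# G_hp: healthy -> pathological, G_ph: pathological -> healthy.
G_hp = lambda x: x + 0.1   # placeholder for "add lesions"
G_ph = lambda x: x - 0.1   # placeholder for "remove lesions"

# A dummy healthy slice in the (1, 256, 256) shape the models expect.
healthy = np.zeros((1, 256, 256), dtype=np.float32)

# Cycle: healthy -> synthetic pathological -> reconstructed healthy.
fake_pathological = G_hp(healthy)
rec_healthy = G_ph(fake_pathological)

# The cycle-consistency loss penalises the difference between the
# input and its reconstruction after a full A -> B -> A round trip.
cycle_loss = l1(rec_healthy, healthy)
```

With the toy generators above, the reconstruction is exact and the cycle loss is zero; in training, this loss is what ties the two real generators together.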
Demo animations in the repository show subject-specific multiple sclerosis lesion generation and pseudo-healthy synthesis of multiple sclerosis images.
The changes in ventricular volume are explained in the paper
Subject-Specific Lesion Generation and Pseudo-Healthy Synthesis for Multiple Sclerosis Brain Images.
Berke Doğa Başaran1,2, Mengyun Qiao2, Paul M Matthews3,4, Wenjia Bai1,2,3
1Department of Computing, Imperial College London, UK.
2Data Science Institute, Imperial College London, UK.
3Department of Brain Sciences, Imperial College London, UK.
4 UK Dementia Research Institute, Imperial College London, London, UK.
The repository offers the official implementation of our paper in PyTorch.
Copyright (C) 2022 Imperial College London, UK.
All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)
The code is released for academic research use only. For commercial use, please contact bdb19@ic.ac.uk.
Clone this repo.
git clone https://github.com/dogabasaran/lesion-synthesis
cd lesion-synthesis/
This code requires PyTorch 0.4.1+ and Python 3.6.9+. Please install dependencies with
pip install -r requirements.txt
By default, the code does not generate lesion masks for the synthetic pathological images. If you wish to generate them, you will need the MIRTK and FSL toolboxes, which we use for brain extraction and registration in order to accurately extract the masks of the synthetic lesions.
See here for FSL installation.
See here for MIRTK installation.
To generate the lesion masks, pass in the option --saveMask during testing (see below).
Training and testing data should be organised in the following structure:
datasets/
└──dataset_name
├── trainA (training images from domain A - healthy images)
│ ├── healthy_1.nii.gz
│ ├── healthy_2.nii.gz
: :
├── trainB (training images from domain B - pathological images with lesions)
│ ├── pathological_1.nii.gz
│ ├── pathological_2.nii.gz
: :
├── trainC (training images from domain C - masked lesion intensity images)
│ ├── foreground_1.nii.gz
│ ├── foreground_2.nii.gz
: :
├── testA (testing images from domain A - healthy --> pathological)
│ ├── healthy_test_1.nii.gz
│ ├── healthy_test_2.nii.gz
: :
└── testB (testing images from domain B - pathological --> healthy)
    ├── pathological_test_1.nii.gz
    ├── pathological_test_2.nii.gz
    : :
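The layout above can be created with a few lines of Python before you copy your images in. The dataset name "my_dataset" is a placeholder; substitute your own.

```python
import tempfile
from pathlib import Path

# Root of the repository checkout; a temporary directory stands in here.
repo_root = Path(tempfile.mkdtemp())

# Create the five split directories the data loaders expect.
dataset_root = repo_root / "datasets" / "my_dataset"
for split in ["trainA", "trainB", "trainC", "testA", "testB"]:
    (dataset_root / split).mkdir(parents=True, exist_ok=True)

created = sorted(p.name for p in dataset_root.iterdir())
print(created)  # ['testA', 'testB', 'trainA', 'trainB', 'trainC']
```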
The code is optimised for files in NIfTI format (ending in .nii.gz). Images should be grayscale and of size (1, 256, 256). If you wish to use 3-channel RGB images, you will need to modify models/networks.py.
NIfTI image files do not need to be named in a specific format.
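If your slices are not already 256 x 256, a simple center-crop/zero-pad brings them to the expected shape. This helper is an illustration, not part of the repository's data loader, and operates on a plain NumPy array (you would extract the array from a NIfTI file with a reader such as nibabel).

```python
import numpy as np

def to_model_shape(img, size=256):
    # Center-crop or zero-pad a 2D grayscale slice to (1, size, size).
    h, w = img.shape
    # Crop symmetrically if the slice is larger than the target.
    top = max((h - size) // 2, 0)
    left = max((w - size) // 2, 0)
    img = img[top:top + size, left:left + size]
    # Zero-pad symmetrically if the slice is smaller than the target.
    ph, pw = size - img.shape[0], size - img.shape[1]
    img = np.pad(img, ((ph // 2, ph - ph // 2), (pw // 2, pw - pw // 2)))
    # Add the channel axis expected by the models: (1, size, size).
    return img[np.newaxis, ...]

slice_2d = np.random.rand(240, 300).astype(np.float32)
print(to_model_shape(slice_2d).shape)  # (1, 256, 256)
```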
In our paper we refer to the three different domains as the "Healthy (H)", "Pathological (P)", and "Foreground (F)" domains. In the code we use "A" for the healthy domain, "B" for the pathological domain, and "C" for the foreground domain.
- Download a dataset
- To view training results and loss plots, run
python3 -m visdom.server
and click the URL http://localhost:8097.
- To reproduce the results reported in the paper, you will need an NVIDIA GeForce RTX 3080.
- Train a model:
python3 train.py --dataroot ./datasets/{dataset_name}/ --name newmodel_name --model agan_foreground --dataset_mode lesion --pool_size 50 --batch_size 1 --niter 100 --niter_decay 100 --gpu_ids 0 --display_id 0 --display_freq 100 --print_freq 100
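Assuming the CycleGAN-style convention this codebase builds on, --niter keeps the learning rate constant and --niter_decay then decays it linearly towards zero. A minimal sketch of that schedule (base_lr is an assumed default, not a value taken from this repository):

```python
def lr_at_epoch(epoch, base_lr=0.0002, niter=100, niter_decay=100):
    # Constant learning rate for the first `niter` epochs, then a
    # linear decay to zero over the next `niter_decay` epochs.
    decay_fraction = max(0, epoch - niter) / float(niter_decay)
    return base_lr * max(0.0, 1.0 - decay_fraction)

print(lr_at_epoch(50))   # 0.0002 - still in the constant phase
print(lr_at_epoch(150))  # 0.0001 - halfway through the decay phase
print(lr_at_epoch(200))  # 0.0    - fully decayed
```

With --niter 100 --niter_decay 100 as in the command above, training runs for 200 epochs in total.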
We have two separate testing scripts, one for the A --> B --> A (Healthy --> Pathological --> Healthy) loop, and the second for the B --> A --> B (Pathological --> Healthy --> Pathological) loop.
The results will be saved at ./results/. Use --results_dir {directory_path_to_save_result} to specify a different results directory.
Use --saveMask to save the lesion masks of the generated pathological images. You will need to add the FSL and MIRTK toolbox commands to your PATH.
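A quick way to confirm the toolboxes are visible before running with --saveMask is to look them up on PATH. The tool names below are assumptions about a typical install: bet and flirt ship with FSL, and mirtk is MIRTK's command-line entry point.

```python
import shutil

def missing_tools(tools):
    # Return the subset of tool names not discoverable on PATH.
    return [t for t in tools if shutil.which(t) is None]

# bet/flirt (FSL) and mirtk (MIRTK) are assumed executable names.
missing = missing_tools(["bet", "flirt", "mirtk"])
if missing:
    print("Not on PATH: " + ", ".join(missing))
```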
For A --> B --> A:
You need to have healthy images in the datasets/{dataset_name}/testA/ directory.
This will produce synthetic pathological images in the results/{task_name}/{epoch}/images/fake_B/ directory, and re-created healthy images in results/{task_name}/{epoch}/images/rec_A/.
python3 test.py --dataroot ./datasets/{dataset_name}/ --name newmodel_name --model agan_testa --dataset_mode lesion_testa --norm instance --phase test --batch_size 1 --gpu_ids 0 --num_test 5000 --epoch latest
For B --> A --> B:
You need to have pathological images in the datasets/{dataset_name}/testB/ directory.
This will produce pseudo-healthy images in the results/{task_name}/{epoch}/images/fake_A/ directory, and re-created pathological images in results/{task_name}/{epoch}/images/rec_B/.
python3 test.py --dataroot ./datasets/{dataset_name}/ --name newmodel_name --model agan_testb --dataset_mode lesion_testb --norm instance --phase test --batch_size 1 --gpu_ids 0 --num_test 5000 --epoch latest
If you use this code for your research, please cite our paper.
@misc{basaran2022synthlesion,
doi = {10.48550/ARXIV.2208.02135},
author = {Basaran, Berke Doga and Qiao, Mengyun and Matthews, Paul M. and Bai, Wenjia},
title = {Subject-Specific Lesion Generation and Pseudo-Healthy Synthesis for Multiple Sclerosis Brain Images},
publisher = {arXiv},
year = {2022}
}
This source code is inspired by CycleGAN, GestureGAN, SelectionGAN and AttentionGAN.
If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Berke Doga Basaran (bdb19@ic.ac.uk).
If you'd like to work together or get in contact with me, please email bdb19@ic.ac.uk.