This is the official PyTorch implementation of VSGD-Net: Virtual Staining Guided Melanocyte Detection on Histopathological Images
- Linux or macOS
- Python 2 or 3
- NVIDIA GPU (11G memory or larger) + CUDA cuDNN
- Install PyTorch and dependencies from http://pytorch.org
- Install the python library dominate:
```bash
pip install dominate
```
- Clone this repo:
```bash
git clone https://github.com/kechunl/VSGD-Net.git
cd VSGD-Net
```
- A few example H&E skin biopsy images are included in the `datasets/test_A` folder.
- Please download the pre-trained Melanocyte model from here (Google Drive link) and unzip it under `./checkpoints/`.
- Test the model:
```bash
bash ./scripts/test_melanocyte.sh
```
The test results will be saved to an HTML file: `./results/melanocyte/test_latest/index.html`.
- An example training script is provided (`./scripts/train_melanocyte.sh`):
```bash
# Multi-GPU, use decoder feature in FPN, use attention module
CUDA_VISIBLE_DEVICES=0,1,2,3 python -m torch.distributed.launch --nproc_per_node 4 --master_port 28501 train.py --name Melanocyte_Attn_DecoderFeat --dataroot DATA_PATH --resize_or_crop none --gpu_ids 0,1,2,3 --batchSize 2 --no_instance --loadSize 256 --ngf 32 --has_real_image --save_epoch_freq 5 --use_resnet_as_backbone --use_UNet_skip --fpn_feature decoder --niter_decay 200
```
Note: please specify the data path as explained in Training with your own dataset.
- If you want to train with your own dataset, please generate the corresponding image patches and name the folders `train_A` and `train_B`. For detection, you should also name the mask folder `train_mask`. In our paper, we use 256x256 patches at 10x magnification; please refer to the paper for the preprocessing steps.
- The default preprocessing setting is `none`, which does nothing other than making sure the image dimensions are divisible by 32. If you want a different setting, change it with the `--resize_or_crop` option. For example, `scale_width_and_crop` first resizes the image to width `opt.loadSize` and then does a random crop of size `(opt.fineSize, opt.fineSize)`; `crop` skips the resizing step and only performs random cropping; `scale_width` scales the width of all training images to `opt.loadSize` (256) while keeping the aspect ratio.
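The resize options above boil down to simple geometry. As an illustration (this is a sketch, not the repo's actual code; `preprocess_size` is a made-up helper, and `none` is shown here as rounding each side up to a multiple of 32):

```python
def preprocess_size(w, h, mode="none", load_size=256, fine_size=256):
    """Return the (width, height) after the resize step, plus the crop
    size if a random crop is applied (None otherwise)."""
    if mode == "none":
        # make both sides divisible by 32 (rounded up in this sketch)
        return (w + (-w) % 32, h + (-h) % 32), None
    if mode == "scale_width":
        # scale width to load_size, keep aspect ratio
        return (load_size, round(h * load_size / w)), None
    if mode == "scale_width_and_crop":
        # scale width, then random-crop a fine_size square
        return (load_size, round(h * load_size / w)), (fine_size, fine_size)
    if mode == "crop":
        # no resizing, random crop only
        return (w, h), (fine_size, fine_size)
    raise ValueError(f"unknown mode: {mode}")
```

For example, a 500x300 patch under `scale_width` becomes 256x154, while under `none` it is padded/resized toward 512x320.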
- Flags: see `options/train_options.py` and `options/base_options.py` for all the training flags; see `options/test_options.py` and `options/base_options.py` for all the test flags.
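These option files follow the usual Pix2PixHD layout: `base_options.py` defines the flags shared by training and testing, and the train/test option classes extend it. A minimal sketch of that pattern (illustrative defaults only; the real files define many more flags):

```python
import argparse

class BaseOptions:
    """Flags shared by training and testing (sketch)."""
    def initialize(self, parser):
        parser.add_argument("--dataroot", type=str, default="./datasets")
        parser.add_argument("--loadSize", type=int, default=256)
        parser.add_argument("--resize_or_crop", type=str, default="none")
        return parser

class TrainOptions(BaseOptions):
    """Training-only flags extend the shared ones (sketch)."""
    def initialize(self, parser):
        parser = super().initialize(parser)
        parser.add_argument("--niter_decay", type=int, default=200)
        parser.add_argument("--save_epoch_freq", type=int, default=5)
        return parser

# parse with defaults only (no command-line arguments)
opts = TrainOptions().initialize(argparse.ArgumentParser()).parse_args([])
```

With this layout, a flag like `--resize_or_crop` is visible to both `train.py` and `test.py`, while `--niter_decay` exists only for training.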
This project is based on Pix2PixHD.