
# nnSAM: Plug-and-play Segment Anything Model Improves nnUNet Performance

nnSAM is built entirely on top of nnU-Net, so you can follow the standard nnUNet instructions exactly.

Install nnSAM:

```bash
conda create -n nnsam python=3.9
conda activate nnsam
pip install git+https://github.com/ChaoningZhang/MobileSAM.git
pip install timm
pip install git+https://github.com/Kent0n-Li/nnSAM.git
```

Before running any nnU-Net command, set the environment variable `MODEL_NAME` to `nnsam` (use `set` on Windows, `export` on Linux/macOS):

```bash
set MODEL_NAME=nnsam
nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
```
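If you launch the pipeline from Python instead of a shell, the same switch can be set programmatically. A minimal sketch, assuming only that the variable name `MODEL_NAME` is the one shown above; it must be set before any nnU-Net entry point runs in the process:

```python
import os

# nnSAM selects its backbone via this environment variable; set it before
# any nnU-Net command or module is invoked in this process.
os.environ["MODEL_NAME"] = "nnsam"
print(os.environ["MODEL_NAME"])  # nnsam
```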

Training and inference follow the usual nnU-Net calls:

```bash
# General training call
nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD [additional options, see -h]

# Run validation only and export softmax outputs as .npz
nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD --val --npz

# 2D configuration
nnUNetv2_train DATASET_NAME_OR_ID 2d FOLD

# Full-resolution 3D configuration
nnUNetv2_train DATASET_NAME_OR_ID 3d_fullres FOLD

# Inference
nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_NAME_OR_ID -c CONFIGURATION --save_probabilities
```
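With `--save_probabilities`, nnU-Net additionally writes a per-case `.npz` of class probabilities alongside each predicted segmentation. A hedged sketch of how such an array could be post-processed; the array below is a fabricated stand-in with the assumed `(num_classes, *spatial_dims)` layout rather than real nnU-Net output:

```python
import numpy as np

# Stand-in for a loaded probability map, e.g. np.load("case.npz");
# 3 classes over an 8x8x8 volume, normalized like a softmax output.
probs = np.random.rand(3, 8, 8, 8)
probs /= probs.sum(axis=0, keepdims=True)

# Hard segmentation: the most probable class index at every voxel.
seg = probs.argmax(axis=0)
print(seg.shape)  # (8, 8, 8)
```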

## How to get started?

Read these:

## Additional information

## Acknowledgements

nnU-Net is developed and maintained by the Applied Computer Vision Lab (ACVL) of Helmholtz Imaging and the Division of Medical Image Computing at the German Cancer Research Center (DKFZ).