Commit 94fdeee ("toothfairy2 code complete"), 1 parent 6f184c8

5 files changed: +564 −1 lines changed

documentation/competitions/Toothfairy2.md renamed to documentation/competitions/Toothfairy2/Toothfairy2.md

Lines changed: 25 additions & 1 deletion
Authors: \
Fabian Isensee*, Yannick Kirchhoff*, Lars Kraemer, Max Rokuss, Constantin Ulrich, Klaus H. Maier-Hein

*: equal contribution

Author Affiliations: \
Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg \
Helmholtz Imaging

# Introduction

This document describes our submission to the [Toothfairy2 Challenge](https://toothfairy2.grand-challenge.org/toothfairy2/).
[...] mirroring and train for 1500 instead of the standard 1000 epochs. [...]
# Dataset Conversion

# Experiment Planning and Preprocessing

Adapt and run the [dataset conversion script](../../../nnunetv2/dataset_conversion/Dataset119_ToothFairy2_All.py).
This script simply converts the .mha files to NIfTI (smaller file size) and removes the unused label IDs.
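The label-removal step of the conversion can be sketched as below. The id mapping here is a made-up example (the real mapping lives in the conversion script), and the actual script additionally reads the .mha files and writes compressed NIfTI, e.g. via SimpleITK:

```python
import numpy as np

def remap_labels(seg: np.ndarray, mapping: dict) -> np.ndarray:
    """Keep only the label ids listed in `mapping`, translating old -> new ids.
    Any id not listed becomes background (0)."""
    out = np.zeros_like(seg)
    for old_id, new_id in mapping.items():
        out[seg == old_id] = new_id
    return out

# hypothetical mapping: keep ids 11 and 21, drop everything else (e.g. 99)
seg = np.array([[0, 11, 21], [99, 11, 0]])
print(remap_labels(seg, {11: 1, 21: 2}).tolist())  # -> [[0, 1, 2], [0, 1, 0]]
```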
## Extract fingerprint:
`nnUNetv2_extract_fingerprint -d 119 -np 48`
[...]

Add the following configuration to the generated plans file:

[...]
Aside from changing the patch size, this makes the architecture one stage deeper (one more pooling + res blocks), enabling
it to make effective use of the larger input.

# Preprocessing

`nnUNetv2_preprocess -d 119 -c 3d_fullres_torchres_ps160x320x320_bs2 -plans_name nnUNetResEncUNetLPlans -np 48`

# Training
We train two models on all training cases:

```bash
nnUNetv2_train 119 3d_fullres_torchres_ps160x320x320_bs2 all -p nnUNetResEncUNetLPlans -tr nnUNetTrainer_onlyMirror01_1500ep
nnUNet_results=${nnUNet_results}_2 nnUNetv2_train 119 3d_fullres_torchres_ps160x320x320_bs2 all -p nnUNetResEncUNetLPlans -tr nnUNetTrainer_onlyMirror01_1500ep
```

Models are trained from scratch.

Note that in the second line we override the `nnUNet_results` environment variable so that the same model can be trained twice without overwriting the first run's results.
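The same environment override can be driven from Python, e.g. when orchestrating several runs. This is only an illustrative sketch (the command comes from the calls above; the suffix convention is ours), not part of our pipeline:

```python
import os
import subprocess

def run_with_results_dir(cmd: list, results_suffix: str = "") -> dict:
    """Build an environment where nnUNet_results gets an optional suffix,
    so two identical trainings write to different output folders."""
    env = dict(os.environ)
    base = env.get("nnUNet_results", "nnUNet_results")
    env["nnUNet_results"] = base + results_suffix
    # subprocess.run(cmd, env=env, check=True)  # uncomment to actually launch training
    return env

env = run_with_results_dir(
    ["nnUNetv2_train", "119", "3d_fullres_torchres_ps160x320x320_bs2", "all"],
    results_suffix="_2",
)
print(env["nnUNet_results"].endswith("_2"))  # -> True
```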

# Inference

We ensemble the two models from above. On a technical level, we copy the two fold_all folders into one training output
directory and rename them fold_0 and fold_1. This lets us use nnU-Net's cross-validation ensembling strategy, which
is more computationally efficient (needed to meet the time limit on grand-challenge.org).
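The folder shuffling can be sketched as follows. The paths are placeholders; all nnU-Net needs is that the copies are named fold_0, fold_1, ... inside a single training output directory:

```python
import shutil
from pathlib import Path

def merge_fold_all_runs(run_dirs, target_dir):
    """Copy each run's fold_all folder into target_dir as fold_0, fold_1, ...,
    so nnU-Net treats the independent runs as folds of one training."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    for i, run in enumerate(run_dirs):
        shutil.copytree(Path(run) / "fold_all", target / f"fold_{i}")

# placeholder paths for illustration:
# merge_fold_all_runs(["/path/to/run1_output", "/path/to/run2_output"], "/path/to/merged")
```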

Run inference with the [inference script](inference_script_semseg_only_customInf2.py).

# Postprocessing

If the prediction of a class on some test case is smaller than the corresponding cutoff size, it is removed
(replaced with background).

Cutoff values were optimized using five-fold cross-validation on the Toothfairy2 training data. We optimized cutoffs for HD95 and Dice separately;
the final cutoff for each class is the smaller of the two values. You can find our volume cutoffs in the inference
script as part of our `postprocess` function.
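In spirit, the postprocessing does something like the sketch below. The cutoff numbers here are invented for illustration; the real per-class values are in the inference script:

```python
import numpy as np

def apply_volume_cutoffs(seg: np.ndarray, cutoffs: dict) -> np.ndarray:
    """Set every class whose predicted volume (in voxels) falls below its
    cutoff to background (0)."""
    out = seg.copy()
    for cls, min_voxels in cutoffs.items():
        mask = seg == cls
        if mask.sum() < min_voxels:
            out[mask] = 0
    return out

# invented cutoffs: class 1 needs >= 4 voxels, class 2 needs >= 2
seg = np.array([[1, 1, 0], [2, 2, 2]])
print(apply_volume_cutoffs(seg, {1: 4, 2: 2}).tolist())  # -> [[0, 0, 0], [2, 2, 2]]
```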

documentation/competitions/Toothfairy2/__init__.py

Whitespace-only changes.
