Authors: \
Fabian Isensee*, Yannick Kirchhoff*, Lars Kraemer, Max Rokuss, Constantin Ulrich, Klaus H. Maier-Hein

*: equal contribution

Author Affiliations:\
Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg \
Helmholtz Imaging

# Introduction

This document describes our submission to the [Toothfairy2 Challenge](https://toothfairy2.grand-challenge.org/toothfairy2/).
Our model is an nnU-Net ResEnc L operating at an enlarged patch size of 160x320x320; we restrict mirroring augmentation
(trainer `nnUNetTrainer_onlyMirror01_1500ep`) and train for 1500 instead of the standard 1000 epochs.

# Dataset Conversion

Adapt and run the [dataset conversion script](../../../nnunetv2/dataset_conversion/Dataset119_ToothFairy2_All.py).
This script simply converts the mha files to nifti (smaller file size) and removes the unused label ids.
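
For reference, a sketch of how it might be invoked (this assumes the script is run via its `__main__` block from the
repository root and that the input/output paths inside the file have been adapted first):

```bash
# run after adapting the dataset paths inside the script
python nnunetv2/dataset_conversion/Dataset119_ToothFairy2_All.py
```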

# Experiment Planning and Preprocessing

## Extract fingerprint:
`nnUNetv2_extract_fingerprint -d 119 -np 48`
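
The steps below assume the `nnUNetResEncUNetLPlans` plans file exists for dataset 119. If it has not been generated yet,
nnU-Net's ResEnc L planner produces it; a sketch assuming the standard nnU-Net v2 CLI (the planner choice is our
assumption, not stated in this document):

```bash
# generates nnUNetResEncUNetLPlans.json for dataset 119
nnUNetv2_plan_experiment -d 119 -pl nnUNetPlannerResEncL
```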

Add the following configuration to the generated plans file:
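
The configuration goes into the `configurations` dictionary of `nnUNetResEncUNetLPlans.json`. As an illustrative sketch
only (the configuration name, patch size and batch size follow from the commands in this document; everything else,
including the use of `inherits_from`, is an assumption):

```json
"3d_fullres_torchres_ps160x320x320_bs2": {
    "inherits_from": "3d_fullres",
    "patch_size": [160, 320, 320],
    "batch_size": 2
}
```

The actual entry additionally overrides the network architecture, which this sketch omits.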

Aside from changing the patch size, this makes the architecture one stage deeper (one more pooling + res blocks), enabling
it to make effective use of the larger input.

# Preprocessing
`nnUNetv2_preprocess -d 119 -c 3d_fullres_torchres_ps160x320x320_bs2 -plans_name nnUNetResEncUNetLPlans -np 48`

# Training
We train two models on all training cases:

```bash
nnUNetv2_train 119 3d_fullres_torchres_ps160x320x320_bs2 all -p nnUNetResEncUNetLPlans -tr nnUNetTrainer_onlyMirror01_1500ep
nnUNet_results=${nnUNet_results}_2 nnUNetv2_train 119 3d_fullres_torchres_ps160x320x320_bs2 all -p nnUNetResEncUNetLPlans -tr nnUNetTrainer_onlyMirror01_1500ep
```
Models are trained from scratch.

Note how the second line overwrites the nnUNet_results variable so that the same model can be trained twice without
overwriting the results.

# Inference
We ensemble the two models from above. On a technical level we copy the two fold_all folders into one training output
directory and rename them to fold_0 and fold_1. This lets us use nnU-Net's cross-validation ensembling strategy, which
is more computationally efficient (needed for the time limit on grand-challenge.org).
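
A sketch of this folder surgery (the dataset folder name and the results layout are assumptions based on nnU-Net's
default conventions; adapt to your setup):

```bash
# run name follows nnU-Net's trainer__plans__configuration convention
RUN=Dataset119_ToothFairy2_All/nnUNetTrainer_onlyMirror01_1500ep__nnUNetResEncUNetLPlans__3d_fullres_torchres_ps160x320x320_bs2
# the first model's fold_all becomes fold_0, the second one's becomes fold_1
cp -r "${nnUNet_results}/${RUN}/fold_all" "${nnUNet_results}/${RUN}/fold_0"
cp -r "${nnUNet_results}_2/${RUN}/fold_all" "${nnUNet_results}/${RUN}/fold_1"
```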

Run inference with the [inference script](inference_script_semseg_only_customInf2.py).

# Postprocessing
If the prediction of a class on some test case is smaller than the corresponding cutoff size, it is removed
(replaced with background).

Cutoff values were optimized using a five-fold cross-validation on the Toothfairy2 training data. We optimize HD95 and Dice separately.
The final cutoff for each class is then the smaller of the two resulting values. You can find our volume cutoffs in the inference
script as part of our `postprocess` function.
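
Conceptually this is a per-class volume threshold. A minimal sketch of the idea (function name, cutoff table and
spacing handling are illustrative; the actual values and implementation live in the inference script's `postprocess`
function):

```python
import numpy as np

def apply_volume_cutoffs(seg: np.ndarray, cutoffs: dict, voxel_volume: float) -> np.ndarray:
    """Replace each class whose total predicted volume is below its cutoff with background (0)."""
    for label, min_volume in cutoffs.items():
        mask = seg == label
        if 0 < mask.sum() * voxel_volume < min_volume:
            seg[mask] = 0
    return seg
```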