I am trying to train nnUNet on BraTS 2023 data (1251 exams).
The idea is to train on all cases and then evaluate on a separate test set.
With the standard 3d_fullres config, everything works without trouble.
However, the nnUNet documentation recommends using the residual encoder presets, so I tried:
nnUNetv2_train 1337 3d_fullres all -p nnUNetPlannerResEncL -num_gpus 1
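(For context, my reading of the nnUNet documentation is that the ResEnc presets need their own plans generated before training, roughly as below; the -pl flag and the resulting plans identifier reflect my understanding of the docs rather than my exact setup, so please correct me if this is wrong.)

# generate ResEnc L plans and preprocessed data first, as I understand the docs
nnUNetv2_plan_and_preprocess -d 1337 -pl nnUNetPlannerResEncL
# training then points at the generated plans identifier
nnUNetv2_train 1337 3d_fullres all -p nnUNetResEncUNetLPlans -num_gpus 1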
Notably, the training reaches a pseudo Dice of 1 for some channels and 0.999 for others (can this flavor of nnUNet really overfit the BraTS training set that well?).
After training, I tried to run inference on our test set. For this, I used:
For this to work, I had to manually copy some .json files. Am I using the wrong command here?
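For reference, a generic inference call for a model trained like this would look roughly as follows (the folder paths and the plans identifier are placeholders, not my exact command):

# folds set to 'all' because training used the 'all' fold; plans name assumed to match the ResEnc L preset
nnUNetv2_predict -i /path/to/test_images -o /path/to/predictions -d 1337 -c 3d_fullres -f all -p nnUNetResEncUNetLPlans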
The resulting segmentations are terrible: