Questions and issues regarding training with nnUNetPlannerResEncL on BraTS2023 #2653

Open
neuronflow opened this issue Dec 23, 2024 · 0 comments

neuronflow commented Dec 23, 2024

I am trying to train nnUNet on BraTS 2023 data (1251 exams).
The idea is to train on all cases and then evaluate on a separate test set.

With the standard 3d_fullres configuration, everything works without trouble.
However, nnUNet recommends using the residual encoder presets, so I tried:

nnUNetv2_train 1337 3d_fullres all -p nnUNetPlannerResEncL -num_gpus 1
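
For reference, my understanding of the intended ResEnc(L) workflow from the documentation is roughly the two commands below, with the planner class name passed at preprocessing time and the resulting plans identifier (nnUNetResEncUNetLPlans) passed at training time. Please correct me if I am mixing these up:

nnUNetv2_plan_and_preprocess -d 1337 -pl nnUNetPlannerResEncL
nnUNetv2_train 1337 3d_fullres all -p nnUNetResEncUNetLPlans -num_gpus 1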

Notably, the training reaches a pseudo Dice of 1.0 for some channels and 0.999 for others (can this flavor of nnUNet really overfit the BraTS training set so well?).

After training, I tried to run inference on our test set. To do so, I used:

nnUNetv2_predict -i /input/imagesTs -o /output/predictions -d 1337 -f all -c 3d_fullres -p nnUNetResEncUNetLPlans 

For this to work, I had to manually copy some .json files. Am I using the wrong command here?
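
For context, this is the results-folder layout I assumed nnUNetv2_predict would read from; the dataset folder name below is a placeholder, and the trainer/plans folder and fold_all naming are my guess based on how my other trainings were saved:

nnUNet_results/Dataset1337_<dataset_name>/nnUNetTrainer__nnUNetResEncUNetLPlans__3d_fullres/
    dataset.json
    plans.json
    fold_all/
        checkpoint_final.pth

If some of these files are supposed to be placed there automatically by the training run, that might point to what went wrong on my side.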

The resulting segmentations are terrible:
(screenshot of an example predicted segmentation attached)
