Is nnunet right for me? (Realtime MSK Ultrasound Use Case) #2655

Open
Sharpz7 opened this issue Dec 23, 2024 · 1 comment
Sharpz7 commented Dec 23, 2024

Hey folks,

I am interested in using nnU-Net for simple 2D prediction on MSK ultrasound images, which are never larger than 500x500. My issue is how slow nnU-Net inference seems to be compared to other offerings. I know that nnU-Net has many more steps than some of the other architectures (well, really a larger pipeline wrapped around the network itself).

Processing 250 images takes over 100 s regardless of whether I use `predict_single_npy_array` or `predict_from_list_of_npy_arrays`. This is with an 80 GB A100. Increasing the number of processes actually seems to decrease performance (i.e. inference runs with 6, 8, 10, and 15 processes on 250 images all increased the time to completion). [I have mirroring and Gaussian weighting off, with `tile_step_size=1`, performing everything on device.]
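For reference, this is roughly how I set up the predictor (a minimal sketch assuming the nnunetv2 `nnUNetPredictor` API; the model folder path is a placeholder and the 2D array layout / properties dict reflect my understanding of the 2D convention, so exact details may differ by version):

```python
import numpy as np
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

# Sketch of my inference setup: no mirroring, no gaussian weighting, tile_step_size=1,
# everything kept on the GPU. The model folder path is a placeholder.
predictor = nnUNetPredictor(
    tile_step_size=1.0,                 # non-overlapping tiles
    use_gaussian=False,                 # gaussian importance weighting off
    use_mirroring=False,                # test-time mirroring off
    perform_everything_on_device=True,  # keep as much as possible on the GPU
    device=torch.device('cuda', 0),
    verbose=False,
    allow_tqdm=False,
)
predictor.initialize_from_trained_model_folder(
    '/path/to/nnUNet_results/DatasetXXX/nnUNetTrainer__nnUNetPlans__2d',  # placeholder
    use_folds=(0,),
    checkpoint_name='checkpoint_final.pth',
)

# Single 500x500 greyscale image; as I understand the 2D convention, nnU-Net expects a
# (c, 1, y, x) array with a dummy spacing value for the leading dimension.
img = np.random.rand(1, 1, 500, 500).astype(np.float32)
props = {'spacing': (999.0, 1.0, 1.0)}
seg = predictor.predict_single_npy_array(img, props)
```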

I wonder if my use case is actually a good candidate for batch inference. I have small images, so the tiled patches are all roughly the same size, and there is very little pre/post-processing for ultrasound within nnU-Net (compared to other modalities?). My GPU is rarely above 50% utilisation when running nnU-Net.

But more fundamentally, is nnU-Net actually the right choice for me? My suspicion is that the 1-5% gains in Dice are not worth giving up the inference speed of other offerings, especially in MSK ultrasound, where segmentations used for accurate area calculations are rare.

Thanks in advance!


Sharpz7 commented Dec 23, 2024

I will also note that by calling `predict_single_npy_array` from a separate multiprocessing pool that I create myself, I can get double the inference speed with 2 processes.
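Roughly what that looks like (a minimal sketch, not exactly my code: each worker process builds its own predictor, the model folder path is a placeholder, and I use the 'spawn' start method since CUDA is used in the workers):

```python
import multiprocessing as mp
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

MODEL_DIR = '/path/to/nnUNet_results/DatasetXXX/nnUNetTrainer__nnUNetPlans__2d'  # placeholder

_predictor = None  # one predictor instance per worker process


def _init_worker():
    # Build the predictor once per process so the model is only loaded at pool startup.
    global _predictor
    _predictor = nnUNetPredictor(
        tile_step_size=1.0,
        use_gaussian=False,
        use_mirroring=False,
        perform_everything_on_device=True,
        device=torch.device('cuda', 0),
        allow_tqdm=False,
    )
    _predictor.initialize_from_trained_model_folder(
        MODEL_DIR, use_folds=(0,), checkpoint_name='checkpoint_final.pth'
    )


def _predict_one(args):
    img, props = args
    return _predictor.predict_single_npy_array(img, props)


if __name__ == '__main__':
    images = []  # list of (array, properties) tuples, prepared elsewhere
    ctx = mp.get_context('spawn')  # 'spawn' is needed when workers use CUDA
    with ctx.Pool(processes=2, initializer=_init_worker) as pool:
        segs = pool.map(_predict_one, images)
```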
