Hey folks,

I am interested in using nnU-Net for simple 2D prediction on MSK ultrasound images, which are never bigger than 500x500. My issue is how slow nnU-Net seems to be at inference compared to other offerings. I know that nnU-Net involves many more steps than some of the other architectures (well, really just a larger pipeline wrapped around the network itself).
Processing 250 images takes over 100 s regardless of whether I use `predict_single_npy_array` or `predict_from_list_of_npy_arrays`. This is with an 80GB A100. Increasing the number of processes actually seems to decrease performance (inference runs with 6, 8, 10, and 15 processes on the 250 images all increased time to completion). I have mirroring and the gaussian off, with `tile_step_size=1`, performing everything on device.
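For concreteness, this is roughly how the predictor is set up — a minimal sketch assuming the nnU-Net v2 `nnUNetPredictor` API (the model folder, fold selection, and checkpoint name are placeholders, and argument names may differ slightly between versions):

```python
import torch
from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor

predictor = nnUNetPredictor(
    tile_step_size=1.0,                 # no overlap between sliding-window tiles
    use_gaussian=False,                 # gaussian tile weighting off
    use_mirroring=False,                # test-time mirroring off
    perform_everything_on_device=True,  # keep intermediate tensors on the GPU
    device=torch.device('cuda'),
    verbose=False,
    allow_tqdm=False,
)
predictor.initialize_from_trained_model_folder(
    '/path/to/nnUNet_results/DatasetXXX/nnUNetTrainer__nnUNetPlans__2d',  # placeholder path
    use_folds=(0,),
    checkpoint_name='checkpoint_final.pth',
)
```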
I wonder whether I am actually the right use case for batch inference. My images are small, so the sliding-window patches are roughly the same size as the images, and there is very little pre-/post-processing for ultrasound within nnU-Net (compared to other modalities?). My GPU is rarely above 50% utilisation when running nnU-Net.
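For reference, the process-count experiment above was timed along these lines — a rough sketch, assuming `predict_from_list_of_npy_arrays` takes the images, previous-stage segmentations, per-image property dicts, truncated output names, and a `num_processes` argument (check the signature in your installed version):

```python
import time

# `images` is the list of 250 numpy arrays in nnU-Net's expected channel-first
# layout, and `properties` the matching list of per-image property dicts
# (spacing etc.); both are assumed to have been prepared earlier.
for n in (6, 8, 10, 15):
    t0 = time.perf_counter()
    predictor.predict_from_list_of_npy_arrays(
        images,
        None,          # no previous-stage segmentations
        properties,
        None,          # return segmentations rather than writing files
        num_processes=n,
    )
    print(f'{n} processes: {time.perf_counter() - t0:.1f} s')
```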
But more fundamentally, is nnU-Net actually the right choice for me? My suspicion is that the 1-5% gains in Dice are not worth giving up the faster inference of other offerings — especially in MSK ultrasound, where segmentations used for accurate area calculations are rare.
Thanks in advance!