BUG: Fixed Performance Issue In semantic_segmentor.py #313
Conversation
Moved the call to `self._process_predictions()` out of the for loop, where it was causing a major (and unnecessary) performance loss during segmentation prediction, especially on larger images/WSIs (now up to 12x faster).
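The shape of the fix can be sketched as follows. This is a minimal illustration only: the function names are modelled on the PR description, and the stand-in "inference" and "post-processing" steps are hypothetical, not the actual tiatoolbox code.

```python
# Minimal sketch of the performance fix described in this PR.
# All names and operations here are illustrative stand-ins, not
# the real tiatoolbox SemanticSegmentor implementation.

def run_before(tiles):
    """Before the fix: post-processing runs inside the loop.

    Every iteration re-processes the whole accumulated output,
    giving O(n**2) work over n tiles.
    """
    cum_output = []
    for tile in tiles:
        cum_output.append(tile * 2)       # stand-in for per-tile inference
        _ = [x + 1 for x in cum_output]   # re-processes ALL tiles, every time
    return [x + 1 for x in cum_output]

def run_after(tiles):
    """After the fix: post-processing runs once, outside the loop.

    The accumulated output is processed exactly once, giving O(n) work.
    """
    cum_output = []
    for tile in tiles:
        cum_output.append(tile * 2)       # stand-in for per-tile inference
    return [x + 1 for x in cum_output]    # processed exactly once
```

Both versions produce identical results; only the amount of redundant work differs, which is why the fix changes speed without changing output.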
Codecov Report
@@            Coverage Diff             @@
##           develop     #313      +/-   ##
===========================================
+ Coverage    99.82%   99.84%   +0.02%
===========================================
  Files           53       55       +2
  Lines         5004     5173     +169
  Branches       823      878      +55
===========================================
+ Hits          4995     5165     +170
  Misses           2        2
+ Partials         7        6       -1
Continue to review full report at Codecov.
shaneahmed
left a comment
Thanks @vqdang
I will merge once @simongraham has approved it.
simongraham
left a comment
Looks fine.
@vqdang thank you very much for implementing the fix. Great work.
Carrying on from #285 with a dedicated branch for this fix.
Linked to #287
Checklist:
@ByteHexler To add clarification on the origin of this performance issue.
tiatoolbox/tiatoolbox/models/engine/semantic_segmentor.py
Lines 670 to 673 in ca0ece6
`self._process_predictions` is called every time `cum_output` is updated, so we process `n**2` times compared to the normal `n` times. As such, depending on the size of the WSI, this can lead to a significant slowdown and can potentially crash the system, because many machines won't be able to hold an entire WSI in memory. In the initial design, we assumed that
tiatoolbox/tiatoolbox/models/engine/semantic_segmentor.py
Line 681 in ca0ece6
tiatoolbox/tiatoolbox/models/engine/semantic_segmentor.py
Line 736 in ca0ece6
`free_prediction=True` would free up the prediction in place; however, this did not work as expected.
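The `n**2` vs `n` cost described above can be made concrete by counting how many items the post-processing step touches in each design. This is a toy model, not tiatoolbox code: `per_iteration=True` mimics the buggy behaviour (processing on every `cum_output` update), and `per_iteration=False` mimics the fix (processing once at the end).

```python
def count_processing_work(n, per_iteration):
    """Count items touched by _process_predictions-style post-processing.

    Toy model of the cost analysis in this PR (not real tiatoolbox code):
      per_iteration=True  -> reprocess the whole accumulated output on
                             every update: 1 + 2 + ... + n = n*(n+1)//2 items.
      per_iteration=False -> process the accumulated output once: n items.
    """
    work = 0
    cum_output = []
    for i in range(n):
        cum_output.append(i)              # stand-in for one tile's prediction
        if per_iteration:
            work += len(cum_output)       # buggy path: reprocess everything
    if not per_iteration:
        work = len(cum_output)            # fixed path: single final pass
    return work
```

For 100 tiles the buggy path touches 5050 items against 100 for the fixed path, and the gap grows quadratically with WSI size, consistent with the large speedups reported here.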