QC assessment is not working #6
Comments
Are you running the code on GPU?
Yes, I am running the system on a machine with a GPU. Should I somehow let the code know that I am using the GPU? I do not see any arguments for setting the GPU in the quality assessment code. The tissue segmentation code uses the GPU without any problem, but the output folder only has the ".jpg" file; I do not see any text files. That might be the issue!
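(A quick way to confirm that PyTorch actually sees the GPU, independent of this repository — just a generic sanity check:)

```python
import torch

# Generic sanity check, not part of the repository code:
# confirm that PyTorch can see a CUDA device and report its name.
print(torch.cuda.is_available())          # True if a CUDA device is visible
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # name of GPU 0
```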
I ran it one more time to see if it is just slow because it is using the CPU (although a GPU is available); since yesterday morning, it is still processing one slide.
Is there anything that I can do?
We have tested this on a CPU-only device for a large cohort and it shouldn't take that long at all! Can you please check line 136 in "quality-assessment/run.py"? Do you see the message "Processing YourSlideName"? For debugging you can simply comment out process_tiles (line 148).
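(Illustration only: the real contents of quality-assessment/run.py may differ — this sketch just shows the kind of change meant, with process_tiles taken from the comment above and everything else hypothetical.)

```python
# Hypothetical sketch of the debugging step described above; the actual
# run.py may be structured differently. The idea is to keep the
# "Processing ..." print but skip the heavy tiling call, to check that
# the slide loop itself runs through.
for slide_path in slide_paths:        # hypothetical loop over input slides
    print(f"Processing {slide_path}")  # roughly line 136 in run.py
    # process_tiles(slide_path)        # roughly line 148: comment out to debug
```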
Sorry, I got really busy with a couple of tasks.
Yes, I see that message.
I did this and it generated the output images really quickly.
No, we have our own slides; they are from both Philips and Leica scanners, and they are normal images.
Is there anything we can do to fix this?
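(The log further down shows the slide MPP being set manually to 1.0, so one thing worth checking is whether the scanner metadata of the Philips/Leica files is readable at all — a generic OpenSlide snippet, not part of the repository:)

```python
import openslide

# Generic metadata check, not part of the repository code: confirm that
# OpenSlide can open the slide and that microns-per-pixel metadata exists.
slide = openslide.OpenSlide("your_slide.svs")               # placeholder path
print(slide.properties.get(openslide.PROPERTY_NAME_MPP_X))  # None if metadata is missing
print(slide.properties.get(openslide.PROPERTY_NAME_MPP_Y))
print(slide.dimensions, slide.level_count)
slide.close()
```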
I want to assess your models in our work and compare them with our in-house models. I got the tissue segmentation working with your instructions on the GitHub page, but the QC part does not work. I provided the inputs and it gets stuck at the "Processing <slide_id>.svs" stage after printing these lines (the warning seems to be normal):
"""
/opt/conda/envs/imaging-compath/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and will be removed in 0.15, please use 'weights' instead.
warnings.warn(
/opt/conda/envs/imaging-compath/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or None for 'weights' are deprecated since 0.13 and will be removed in 0.15. The current behavior is equivalent to passing weights=ResNet18_Weights.IMAGENET1K_V1. You can also use weights=ResNet18_Weights.DEFAULT to get the most up-to-date weights.
warnings.warn(msg)
=> loading checkpoint '/home/checkpoint_106.pth'
=> loaded checkpoint '/home/checkpoint_106.pth' (epoch 107)
slides mpp manually set to 1.0
"""
Is there a trick to get past this step?
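(For reference, the two warnings above come from the torchvision 0.13 API change and are harmless; a minimal sketch of the newer call they refer to, assuming the QC model is a standard torchvision ResNet-18 — the repository may wrap this differently:)

```python
import torchvision.models as models
from torchvision.models import ResNet18_Weights

# Old style (this is what triggers the deprecation warnings above):
# model = models.resnet18(pretrained=True)

# Newer torchvision (>= 0.13) style that the warning suggests:
model = models.resnet18(weights=ResNet18_Weights.IMAGENET1K_V1)
```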