Sparse annotations for training #51
Great! Thanks so much Martin!
On Thu, 17 Aug 2023, 21:09, Martin Drawitsch wrote:
Hello @SebastienTs, yes this is possible with the regular torch.nn.CrossEntropyLoss by setting the ignore_index parameter to the label ID that you want to ignore. Since 0 is often reserved for a background class, I usually use other ID values. It just has to be the ID that is used in the label files; no changes to the model outputs are needed.
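The ignore_index approach above can be sketched as follows; the tensor shapes and the choice of 2 as the "unlabeled" ID are illustrative, not taken from the issue:

```python
import torch

# Use label ID 2 to mark unlabeled pixels (an arbitrary choice here);
# pixels with this target value are excluded from the loss.
criterion = torch.nn.CrossEntropyLoss(ignore_index=2)

# Fake model output: batch of 1, 3 classes, 2x2 "image".
logits = torch.randn(1, 3, 2, 2)

# Target labels: the bottom row (value 2) is ignored during loss computation.
target = torch.tensor([[[0, 1],
                        [2, 2]]])

# The mean is taken only over the two labeled pixels.
loss = criterion(logits, target)
```

Note that with reduction='mean' (the default), ignored pixels are also excluded from the averaging denominator, so sparsely labeled images don't dilute the loss.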
The DiceLoss in elektronn3 doesn't support an ignore_idx option, but you can use the weight parameter to set the channel weight of the class that is to be ignored to 0.
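To illustrate the channel-weight idea, here is a minimal generic soft Dice loss with per-class weights; this is a sketch of the technique, not elektronn3's actual DiceLoss implementation, and all names and shapes are assumptions:

```python
import torch

def soft_dice_loss(probs, target_onehot, weight, eps=1e-7):
    # probs, target_onehot: (N, C, H, W); weight: (C,).
    # Each class channel is scored independently, so a weight of 0
    # removes that class from the loss entirely.
    dims = (0, 2, 3)
    intersection = (probs * target_onehot).sum(dims)
    cardinality = (probs + target_onehot).sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    loss_per_class = 1.0 - dice_per_class
    return (weight * loss_per_class).sum() / weight.sum()

# Class 0 acts as the "ignore" class: weight 0 drops it from the loss.
weight = torch.tensor([0.0, 1.0, 1.0])
probs = torch.softmax(torch.randn(1, 3, 4, 4), dim=1)
target = torch.nn.functional.one_hot(
    torch.randint(0, 3, (1, 4, 4)), 3).permute(0, 3, 1, 2).float()
loss = soft_dice_loss(probs, target, weight)
```

Because the zero-weighted channel contributes nothing, changing the labels of ignored pixels has no effect on the resulting loss value.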
Is there currently a way to use sparse annotations for training, i.e. for pixel classification, by ignoring all pixels set to 0 in the target image during loss computation and only considering pixels with a value > 0? If that is not possible, what would be the minimal code modification to achieve this, at least for some (and ideally all) of the existing losses?