[Feature] Support multiple losses during training #818
Conversation
Codecov Report

```diff
@@            Coverage Diff             @@
##           master     #818      +/-   ##
==========================================
+ Coverage   87.64%   89.09%   +1.44%
==========================================
  Files         108      112       +4
  Lines        5886     6081     +195
  Branches      958      977      +19
==========================================
+ Hits         5159     5418     +259
+ Misses        535      468      -67
- Partials      192      195       +3
```
Should add more unittests to improve the coverage.
Force-pushed from c459c0b to 73980c4
Task linked: CU-k5tuzw mix loss
Should add more unittests to improve the code coverage.
LGTM except for the missing unittest
Please fix the lint error.
* multiple losses
* fix lint error
* fix typos
* fix typos
* Adding Attribute
* Fixing loss_ prefix
* Fixing loss_ prefix
* Fixing loss_ prefix
* Add Same
* loss_name must has 'loss_' prefix
* Fix unittest
* Fix unittest
* Fix unittest
* Update mmseg/models/decode_heads/decode_head.py

Co-authored-by: Junjun2016 <hejunjun@sjtu.edu.cn>
Because this has been requested frequently by the community, I directly cloned the related PR of the multiple-losses implementation and made a new PR.
Related PR: #244
Related Issues: #779, #727, #486 and so on.
Here are my results on UNet with backbone `UNet-S5-D16` and model `FCN`.

Note:
(1) `CE` means the loss function is cross entropy and `DC` means dice loss.
(2) `CE` (cross entropy loss) is the default loss function of MMSegmentation configs; I reproduced its training to verify the real difference between the different loss settings.
(3) `loss_weight` is also important. For instance, `(0.5 : 1)` below means the weights of cross entropy loss `CE` and dice loss `DC` are 0.5 and 1, respectively.
(4) I used `--seed 0` but still observed some variance between training runs with the same config.
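The `(0.5 : 1)` weighting above would correspond to a decode-head config along these lines (a sketch, assuming the list-of-losses form of `loss_decode` that this PR adds; the exact `type` and `loss_name` values here are illustrative, with `loss_name` carrying the required `loss_` prefix):

```python
# Sketch of a decode-head loss config with two weighted losses.
# Assumes `loss_decode` accepts a list of loss configs (the feature
# added by this PR); names and types below are illustrative.
loss_decode = [
    # cross entropy with weight 0.5
    dict(type='CrossEntropyLoss', loss_name='loss_ce', loss_weight=0.5),
    # dice loss with weight 1.0
    dict(type='DiceLoss', loss_name='loss_dice', loss_weight=1.0),
]
```

Each entry's `loss_name` must start with the `loss_` prefix so the computed value is picked up and logged as a separate training loss.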