Add weighted_metrics arg to compile #7536
Conversation
Could you list a few use cases where this approach is better? This would be useful data for picking one API or the other.
I think this approach is better because its flexibility covers both use cases, unlike the other API option.

You might be interested in tracking both a weighted and an unweighted metric when you are optimizing for some weighted sub-problem you have devised, but still want to make sure the model is performing its original job well, that is, maintaining its accuracy regardless of the weights (assuming a classification problem). It also seems likely you'd want to understand the relationship between the weighted and unweighted metrics as you tinker with your sample weights. For example, you might choose weights that score really well on your weighted metric while your unweighted metric tanks, which could raise red flags and cause you to reconsider your approach.

It also seems possible in the multi-output scenario that someone would want to compute a weighted metric for one output and an unweighted metric for another. This API handles that use case very easily, whereas the other would not. For example:
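To make the weighted-versus-unweighted comparison concrete, here is a minimal sketch (mine, not code from this PR; the data and weights are illustrative) using plain NumPy to mimic how a sample-weighted metric is averaged:

```python
import numpy as np

def accuracy(y_true, y_pred, sample_weight=None):
    """Fraction of correct predictions, optionally averaged with per-sample weights."""
    correct = (y_true == y_pred).astype(float)
    if sample_weight is None:
        return float(correct.mean())
    w = np.asarray(sample_weight, dtype=float)
    return float((correct * w).sum() / w.sum())

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([1, 0, 0, 0])
# Hypothetical weights that happen to favour the samples the model gets right.
weights = np.array([4.0, 4.0, 1.0, 1.0])

unweighted = accuracy(y_true, y_pred)          # 0.5
weighted = accuracy(y_true, y_pred, weights)   # 0.8: looks healthy while the unweighted score lags
```

Tracking both values side by side is exactly the red-flag check described above: the weighted score alone would hide the fact that half the predictions are wrong.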
I should probably add a test for this use case, actually.
I agree this PR is the better approach, so we will go with this one.
Style: lines in the test are too long, please shorten.
keras/engine/training.py
Outdated
```diff
@@ -637,6 +597,8 @@ def compile(self, optimizer, loss, metrics=None, loss_weights=None,
             If the model has multiple outputs, you can use a different
             `sample_weight_mode` on each output by passing a
             dictionary or a list of modes.
+        weighted_metrics: list of metrics to be evaluated and weighted
+        by sample_weight or class_weight during training and testing
```
Fix indentation
```python
standard_weight = 1
standard_score_sequential = 0.5

decimal_precision = {
    'cntk': 2,
```
Why?
CNTK is only accurate to two decimal places. I'm not sure why.
LGTM, thanks
Alternative to #7482.
This implementation is growing on me. I think the flexibility here is nice.
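As a sketch of the flexibility being discussed, the multi-output case boils down to evaluating one output's metric with weights and another's without. This toy NumPy version (names, data, and weights are all illustrative, not from the PR) shows the two computations this API would let you request per output:

```python
import numpy as np

def accuracy(y_true, y_pred, sample_weight=None):
    # Fraction of correct predictions, optionally sample-weighted.
    correct = (y_true == y_pred).astype(float)
    if sample_weight is None:
        return float(correct.mean())
    w = np.asarray(sample_weight, dtype=float)
    return float((correct * w).sum() / w.sum())

# Two model outputs: weight the 'main' head's metric, leave 'aux' unweighted.
main_true, main_pred = np.array([1, 1, 0]), np.array([1, 0, 0])
aux_true, aux_pred = np.array([0, 1, 1]), np.array([0, 1, 0])
main_weights = np.array([2.0, 1.0, 1.0])  # hypothetical per-sample weights

results = {
    'main_weighted_acc': accuracy(main_true, main_pred, main_weights),  # 0.75
    'aux_acc': accuracy(aux_true, aux_pred),                            # 2/3
}
```

With a single on/off flag, both outputs would have to be weighted or neither; listing metrics under `weighted_metrics` (alongside plain `metrics`) keeps the choice per metric.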