doctest update metrics #2144
Conversation
* updated contrastive.py
* updated focal_loss.py
* updated giou_loss.py
* updated kappa_loss.py
* updated npairs.py
* updated quantiles.py
* reformatting giou_loss.py
* updated testable docs contrastive.py
* updated doctests focal_loss.py
* updated testdocs giou_loss.py
* updated testdocs kappa_loss.py
* updated testdocs npairs.py
* updated testdocs quantiles.py
* minor changes to formatting
* fixing multi-line error
* adding empty line inbetween imports
* adding empty line inbetween imports
* reformatting imports
* reverting code formatting changes
* reformatting imports
Thanks for the contribution! Can we canonicalize the variable names? Like:
- actuals -> y_true
- preds -> y_pred
- all metric instances should be named metric
- all results from metric(y_true, y_pred) should be named result.
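For illustration, a doctest written with this naming convention might look like the sketch below. The input values are placeholders rather than anything taken from this PR, the imports are spelled out for completeness, and the expected outputs are omitted since they depend on the data:
>>> import numpy as np
>>> import tensorflow_addons as tfa
>>> y_true = np.array([4, 4, 3, 4], dtype=np.int32)
>>> y_pred = np.array([4, 4, 3, 2], dtype=np.int32)
>>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
>>> _ = metric.update_state(y_true, y_pred)  # bind to _ so the returned update op is not echoed
>>> result = metric.result()
>>> print('Final result: ', result.numpy())  # expected output omitted in this sketch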
>>> m = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
>>> # m.update_state(actuals, preds)
>>> # print('Final result: ', m.result().numpy())
>>> # Final result: 0.61904764
Why are these commented out?
Please see the image below.
I don't quite understand what you mean by 'all results from metric(y_true, y_pred) should be named result'. Do you mean whenever a metric is directly called, or do you mean metric.update_state(y_true, y_pred) should be assigned to result? If I assign update_state to result, the print statement would have to be
print('Final result: ', result.result().numpy())
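For what it's worth, one reading of that request, sketched below and not part of the PR, is the direct-call form: invoking the metric object itself updates its state and returns the current value, so the returned tensor (not the metric) is what gets named result and printed. Reusing the placeholder y_true and y_pred from the earlier sketch:
>>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
>>> result = metric(y_true, y_pred)  # calling the metric updates state and returns the current value
>>> print('Final result: ', result.numpy())  # expected output omitted in this sketch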
Can you please check this PR? It has a similar issue to the one described below. I have not reverted it back to the state where all checks were cleared.
>>> m = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
>>> # m.update_state(actuals, preds, sample_weight=weights)
>>> # print('Final result: ', m.result().numpy())
>>> # Final result: 0.37209308
ditto
It's not a complete ditto; the second example uses weights.
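For completeness, the weighted variant under the same naming convention might look like the sketch below; the weights are placeholders, sample_weight is the standard Keras metric argument, and y_true and y_pred are reused from the earlier sketch:
>>> weights = np.array([1, 1, 2, 5], dtype=np.int32)
>>> metric = tfa.metrics.CohenKappa(num_classes=5, sparse_labels=True)
>>> _ = metric.update_state(y_true, y_pred, sample_weight=weights)
>>> result = metric.result()
>>> print('Final result: ', result.numpy())  # expected output omitted in this sketch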
>>> preds = tf.constant([[1.0], [0.0], [1.0], [1.0]],
... dtype=tf.float32)
>>> # Matthews correlation coefficient
>>> mcc = MatthewsCorrelationCoefficient(num_classes=1)
tfa.metrics.MatthewsCorrelationCoefficient?
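For reference, the fully qualified form that seems to be suggested would be along these lines, a sketch that assumes the class is exported under tfa.metrics and uses placeholder labels (the y_pred values match the preds shown in the diff above):
>>> import tensorflow as tf
>>> import tensorflow_addons as tfa
>>> y_true = tf.constant([[1.0], [1.0], [1.0], [0.0]], dtype=tf.float32)  # placeholder labels, not from the PR
>>> y_pred = tf.constant([[1.0], [0.0], [1.0], [1.0]], dtype=tf.float32)
>>> metric = tfa.metrics.MatthewsCorrelationCoefficient(num_classes=1)
>>> _ = metric.update_state(y_true, y_pred)
>>> result = metric.result()  # the Matthews correlation coefficient for these placeholders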
I had my doubts about this, but since it was working completely fine, I left it as it is.
This is probably stale as #2171 exists. Let me close it, but feel free to reopen it if I am mistaken.
Fixes part of #2066