[skip ci] Doctest for Accuracy #2287
Conversation
Thanks @Chandan-h-509, I left a comment.
ignite/metrics/accuracy.py (Outdated)
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

engine = Engine(process_function)
metric = Accuracy()
metric.attach(engine, 'accuracy')
I think we could create a default evaluator as above and hide this part of the code from the user?
cc @ydcjeff
I feel it would be good to have this part of the code so that it remains transparent.
Could you also explain the alternative that you are proposing?
There is a way to define a global context using this:
Lines 344 to 353 in 74dabca
doctest_global_setup = """
import torch
from torch import nn, optim
from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *

manual_seed(666)
"""
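One way this global setup could also hide the engine boilerplate suggested above is to predefine a default evaluator inside the setup string, so every docstring doctest can use it directly. A minimal sketch, assuming the setup lives in the Sphinx conf.py; the names eval_step and default_evaluator are chosen here for illustration and are not part of the PR:

# hypothetical extension of doctest_global_setup (a sketch, not the actual configuration)
doctest_global_setup = """
import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy

# an evaluator that simply forwards (y_pred, y) batches to attached metrics
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)
"""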
@vfdev-5 I have made the necessary change.
metric_name = 'accuracy'
engine = evaluator(Accuracy(), metric_name)
@Chandan-h-509 thanks for the update, but (here I agree with your previous comment) it is non-obvious to understand what evaluator is, given its particular API evaluator(Accuracy(), metric_name). I think a tradeoff would be something like:
metric = Accuracy()
metric.attach(evaluator, "accuracy")
preds = torch.tensor([[1, 0, 0, 1]])
target = torch.tensor([[1, 0, 0, 0]])
state = evaluator.run([[preds, target]])
print(state.metrics["accuracy"])
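(For reference: with these tensors, ignite should treat the matching (1, 4) shapes as the binary case; three of the four predictions agree with the target, so the snippet should print 0.75.)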
In addition, I think it would be more helpful to show all possible options when computing accuracy (see the sketch after this list):
- binary case
- multiclass case
- multilabel case
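As an illustration of those three cases, here is a minimal runnable sketch; the helpers eval_step and run_accuracy are hypothetical names introduced for this example, not part of the PR:

import torch
from ignite.engine import Engine
from ignite.metrics import Accuracy

def eval_step(engine, batch):
    # the evaluator simply forwards (y_pred, y) to the attached metric
    return batch

def run_accuracy(metric, preds, target):
    # hypothetical helper: a fresh engine per case so metrics do not interfere
    engine = Engine(eval_step)
    metric.attach(engine, "accuracy")
    state = engine.run([[preds, target]])
    return state.metrics["accuracy"]

# binary case: y_pred and y contain 0/1 values
print(run_accuracy(
    Accuracy(),
    torch.tensor([1, 0, 1, 1]),
    torch.tensor([1, 0, 0, 1]),
))  # 3 of 4 predictions match -> 0.75

# multiclass case: y_pred holds per-class scores, y holds class indices
print(run_accuracy(
    Accuracy(),
    torch.tensor([[0.1, 0.8, 0.1], [0.7, 0.2, 0.1]]),
    torch.tensor([1, 2]),
))  # argmax matches the target for the first sample only -> 0.5

# multilabel case: a sample counts as correct only if all labels match
print(run_accuracy(
    Accuracy(is_multilabel=True),
    torch.tensor([[1, 0, 1], [0, 1, 1]]),
    torch.tensor([[1, 0, 1], [1, 1, 0]]),
))  # first sample matches exactly, second does not -> 0.5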
@vfdev-5 I didn't understand the solution clearly. Could you please elaborate on it?
@Chandan-h-509 the current docstring you are proposing in this PR does not look good, especially this line:

engine = evaluator(Accuracy(), metric_name)

because it is not obvious what the evaluator method is or what its API looks like.
I was proposing that you rework the docstring and doctest_global_setup so that the new docstring resembles https://github.com/pytorch/ignite/pull/2287/files#r735168912. Additionally, you need to add more docstrings for the other cases, as described in https://github.com/pytorch/ignite/pull/2287/files#r735169293.
@Chandan-h-509 Thank you for this PR. Are you still working on it?
@sdesrozis Sorry for the delay. I have been very busy these last 20 days with my college exams and other activities. I will try to complete it in a few days' time.
@Chandan-h-509 Thank you for your help. I am closing this PR since a doctest for Accuracy has been merged.
Addresses #2265
Description: Added doctests for 'Accuracy'
Check list: