
[skip ci] Doctest for Accuracy #2287


Closed
Chandan-h-509 wants to merge 3 commits

Conversation

@Chandan-h-509 (Contributor) commented Oct 22, 2021

Addresses #2265

Description: Added doctests for 'Accuracy'

Check list:

  • New tests are added (if a new feature is added)
  • New doc strings: description and/or example code are in RST format
  • Documentation is updated (if required)

@github-actions bot added the module: metrics label Oct 22, 2021
@vfdev-5 (Collaborator) left a comment

Thanks @Chandan-h-509, I left a comment.

Comment on lines 140 to 145
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

engine = Engine(process_function)
metric = Accuracy()
metric.attach(engine, 'accuracy')
Collaborator

I think we could create a default evaluator as above and hide this part of the code from the user?

cc @ydcjeff
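
For context, the default evaluator that could be hidden in doctest_global_setup can be very small. A minimal sketch of the idea follows; the name default_evaluator and the exact eval_step are illustrative assumptions, not the code that was eventually merged into ignite:

    from ignite.engine import Engine

    def eval_step(engine, batch):
        # In the metric doctests each batch is already a (y_pred, y) pair,
        # so the processing function can simply pass it through.
        return batch

    default_evaluator = Engine(eval_step)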

Contributor Author

I feel it would be good to have this part of the code so that it remains transparent.
Could you also explain the alternative that you are proposing?

Collaborator

There is a way to define a global context using this:

ignite/docs/source/conf.py

Lines 344 to 353 in 74dabca

doctest_global_setup = """
import torch
from torch import nn, optim
from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
manual_seed(666)
"""
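
With such a global setup in place (and assuming it also defines a default_evaluator as sketched above), a docstring example only needs to show the metric-specific part. A minimal, illustrative sketch of what the Accuracy docstring could then contain (not the wording that was merged; the expected output assumes Accuracy's binary code path):

    .. testcode::

        metric = Accuracy()
        metric.attach(default_evaluator, "accuracy")

        preds = torch.tensor([0, 1, 0, 1])
        target = torch.tensor([1, 1, 0, 1])
        state = default_evaluator.run([[preds, target]])
        print(state.metrics["accuracy"])

    .. testoutput::

        0.75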

@Chandan-h-509 (Contributor Author)

@vfdev-5 I have made the necessary change.
Kindly review.

metric_name='accuracy'
engine=evaluator(Accuracy(), metric_name)
@vfdev-5 (Collaborator) commented Oct 24, 2021

@Chandan-h-509 thanks for the update, but (here I agree with your previous comment) it is non-obvious what this evaluator is or what its API evaluator(Accuracy(), metric_name) means. I think a tradeoff would be something like:

            metric = Accuracy()
            metric.attach(evaluator, "accuracy")

            preds = torch.tensor([[1,0,0,1], ])
            target = torch.tensor([[1,0,0,0], ])
            state = evaluator.run([[preds, target], ])
            print(state.metrics["accuracy"])
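
For those tensors the snippet should print 0.75, assuming Accuracy takes its binary code path for same-shaped 0/1 tensors (three of the four predictions match the target).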

Collaborator

In addition, I think it would be more helpful to show all the possible options when computing accuracy (see the sketch after this list):

  1. binary case
  2. multiclass case
  3. multilabel case
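
A sketch of what those three examples could look like is below; the compute_accuracy helper and the sample tensors are illustrative assumptions, not code from this PR:

    import torch
    from ignite.engine import Engine
    from ignite.metrics import Accuracy

    def compute_accuracy(metric, preds, target):
        # Build a throwaway evaluator whose "model output" is the batch itself.
        evaluator = Engine(lambda engine, batch: batch)
        metric.attach(evaluator, "accuracy")
        state = evaluator.run([[preds, target]])
        return state.metrics["accuracy"]

    # 1. binary: y_pred and y are same-shaped tensors of 0/1 values
    print(compute_accuracy(Accuracy(),
                           torch.tensor([1, 0, 1, 1]),
                           torch.tensor([1, 0, 0, 1])))            # 0.75

    # 2. multiclass: y_pred holds per-class scores, y holds class indices
    print(compute_accuracy(Accuracy(),
                           torch.tensor([[0.8, 0.1, 0.1],
                                         [0.2, 0.7, 0.1]]),
                           torch.tensor([0, 2])))                  # 0.5

    # 3. multilabel: both tensors are (batch_size, num_labels) with 0/1 entries
    print(compute_accuracy(Accuracy(is_multilabel=True),
                           torch.tensor([[1, 0, 1], [0, 1, 0]]),
                           torch.tensor([[1, 0, 1], [1, 1, 0]])))  # 0.5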

Contributor Author

@vfdev-5 I didn't clearly understand the tradeoff you are suggesting above.
Could you please elaborate on it?

@vfdev-5 (Collaborator) commented Oct 26, 2021

@Chandan-h-509 the current docstring you are proposing in this PR does not read well, especially this line:

engine=evaluator(Accuracy(), metric_name)

because it is not obvious what the evaluator method is or what its API looks like.
I was proposing that you rework the docstring and doctest_global_setup so that the new docstring resembles https://github.com/pytorch/ignite/pull/2287/files#r735168912

Additionally, you need to add more docstring examples for the other cases, as described in https://github.com/pytorch/ignite/pull/2287/files#r735169293

@sdesrozis (Contributor)

@Chandan-h-509 Thank you for this PR. Are you still working on it?

@Chandan-h-509 (Contributor Author)

@sdesrozis Sorry for the delay. I have been very busy these last 20 days with my college exams and other activities. I will try to complete it in a few days' time.

@sdesrozis (Contributor)

@Chandan-h-509 Thank you for your help. I am closing this PR since a doctest for Accuracy has been merged.

@sdesrozis closed this Dec 12, 2021