
Commit db91bf2: Fix docstring
Parent: 2b41176

2 files changed: +26 −26 lines

ignite/metrics/precision.py

Lines changed: 15 additions & 15 deletions

@@ -99,21 +99,21 @@ class Precision(_BasePrecisionRecall):
             form expected by the metric. This can be useful if, for example, you have a multi-output model and
             you want to compute the metric with respect to one of the outputs.
         average: available options are
-            `False`: default option. For multicalss and multilabel
-                inputs, per class and per label metric is returned. By calling `mean()` on the
-                metric instance, the `macro` setting (which is unweighted average across
-                classes or labels) is returned.
-            `micro`: for multilabel input, every label of each sample is considered itself
-                a sample then precision is computed. For binary and multiclass
-                inputs, this is equivalent with `Accuracy`, so use that metric.
-            `samples`: for multilabel input, at first, precision is computed
-                on a per sample basis and then average across samples is
-                returned. Incompatible with binary and multiclass inputs.
-            `weighted`: for binary and multiclass input, it computes metric for each class then
-                returns average of them weighted by support of classes (number of actual samples
-                in each class). For multilabel input, it computes precision for each label then
-                returns average of them weighted by support of labels (number of actual positive
-                samples in each label).
+            ``False``: default option. For multicalss and multilabel
+                inputs, per class and per label metric is returned. By calling `mean()` on the
+                metric instance, the `macro` setting (which is unweighted average across
+                classes or labels) is returned.
+            ``'micro'``: for multilabel input, every label of each sample is considered itself
+                a sample then precision is computed. For binary and multiclass
+                inputs, this is equivalent with `Accuracy`, so use that metric.
+            ``'samples'``: for multilabel input, at first, precision is computed
+                on a per sample basis and then average across samples is
+                returned. Incompatible with binary and multiclass inputs.
+            ``'weighted'``: for binary and multiclass input, it computes metric for each class then
+                returns average of them weighted by support of classes (number of actual samples
+                in each class). For multilabel input, it computes precision for each label then
+                returns average of them weighted by support of labels (number of actual positive
+                samples in each label).
         is_multilabel: flag to use in multilabel case. By default, value is False.
         device: specifies which device updates are accumulated on. Setting the metric's
             device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
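To make the averaging semantics described in the docstring concrete, here is a minimal NumPy sketch for a small multiclass example. This is an illustration of the definitions, not the ignite implementation; `precision_per_class` is a hypothetical helper written for this example:

```python
import numpy as np

def precision_per_class(y_true, y_pred, n_classes):
    """Per-class precision: TP_c / (TP_c + FP_c), i.e. among samples
    predicted as class c, the fraction that truly belong to class c."""
    prec = np.zeros(n_classes)
    for c in range(n_classes):
        predicted_c = (y_pred == c)
        tp = np.sum(predicted_c & (y_true == c))
        prec[c] = tp / predicted_c.sum() if predicted_c.sum() else 0.0
    return prec

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 2, 2])

per_class = precision_per_class(y_true, y_pred, 3)  # average=False: one value per class
macro = per_class.mean()                            # unweighted mean, like mean() on the metric
support = np.bincount(y_true, minlength=3)          # number of actual samples per class
weighted = np.average(per_class, weights=support)   # average='weighted'
micro = np.mean(y_true == y_pred)                   # micro precision: pool all predictions
```

For this example `per_class` is `[1.0, 0.5, 0.5]`, and `micro` coincides with plain accuracy (4 of 6 correct), which is exactly why the docstring recommends using `Accuracy` instead of `average='micro'` for multiclass input.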

ignite/metrics/recall.py

Lines changed: 11 additions & 11 deletions

@@ -26,17 +26,17 @@ class Recall(_BasePrecisionRecall):
             form expected by the metric. This can be useful if, for example, you have a multi-output model and
             you want to compute the metric with respect to one of the outputs.
         average: available options are
-            `False`: default option. For multicalss and multilabel
-                inputs, per class and per label metric is returned. By calling `mean()` on the
-                metric instance, the `macro` setting (which is unweighted average across
-                classes or labels) is returned.
-            `micro`: for multilabel input, every label of each sample is considered itself
-                a sample then recall is computed. For binary and multiclass inputs, this is
-                equivalent with `Accuracy`, so use that metric.
-            `samples`: for multilabel input, at first, recall is computed on a per sample
-                basis and then average across samples is returned. Incompatible with
-                binary and multiclass inputs.
-        `Recall` does not have `weighted` option as there is in :class:`~ignite.metrics.Precision`,
+            ``False``: default option. For multicalss and multilabel
+                inputs, per class and per label metric is returned. By calling `mean()` on the
+                metric instance, the `macro` setting (which is unweighted average across
+                classes or labels) is returned.
+            ``'micro'``: for multilabel input, every label of each sample is considered itself
+                a sample then recall is computed. For binary and multiclass inputs, this is
+                equivalent with `Accuracy`, so use that metric.
+            ``'samples'``: for multilabel input, at first, recall is computed on a per sample
+                basis and then average across samples is returned. Incompatible with
+                binary and multiclass inputs.
+        `Recall` does not have `weighted` option as there is in :class:`~ignite.metrics.precision.Precision`,
         because for binary and multiclass input, weighted recall, micro recall and `Accuracy`
         are equivalent and for multilabel input, weighted recall is equivalent with the micro one.
         is_multilabel: flag to use in multilabel case. By default, value is False.
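The docstring's claim that weighted recall, micro recall, and accuracy coincide for multiclass input follows from the definition: weighted recall is sum over classes of (support_c / N) * (TP_c / support_c) = (sum of TP_c) / N, which is accuracy. A small NumPy check of this identity (again an illustration, not the ignite implementation; `recall_per_class` is a hypothetical helper):

```python
import numpy as np

def recall_per_class(y_true, y_pred, n_classes):
    """Per-class recall: TP_c / support_c, i.e. among samples that truly
    belong to class c, the fraction predicted as class c."""
    rec = np.zeros(n_classes)
    for c in range(n_classes):
        actual_c = (y_true == c)
        tp = np.sum(actual_c & (y_pred == c))
        rec[c] = tp / actual_c.sum() if actual_c.sum() else 0.0
    return rec

y_true = np.array([0, 0, 0, 1, 1, 2])
y_pred = np.array([0, 0, 1, 1, 2, 2])

rec = recall_per_class(y_true, y_pred, 3)          # average=False
support = np.bincount(y_true, minlength=3)
weighted = np.average(rec, weights=support)        # support-weighted recall
accuracy = np.mean(y_true == y_pred)
assert np.isclose(weighted, accuracy)              # weighted recall == Accuracy
```

The support weights cancel term by term, so offering `average='weighted'` on `Recall` would add nothing over `Accuracy`, which is why the option exists only on `Precision`.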
