@@ -99,21 +99,21 @@ class Precision(_BasePrecisionRecall):
form expected by the metric. This can be useful if, for example, you have a multi-output model and
you want to compute the metric with respect to one of the outputs.
average: available options are
- `False`: default option. For multicalss and multilabel
- inputs, per class and per label metric is returned. By calling `mean()` on the
- metric instance, the `macro` setting (which is unweighted average across
- classes or labels) is returned.
- `micro`: for multilabel input, every label of each sample is considered itself
- a sample then precision is computed. For binary and multiclass
- inputs, this is equivalent with `Accuracy`, so use that metric.
- `samples`: for multilabel input, at first, precision is computed
- on a per sample basis and then average across samples is
- returned. Incompatible with binary and multiclass inputs.
- `weighted`: for binary and multiclass input, it computes metric for each class then
- returns average of them weighted by support of classes (number of actual samples
- in each class). For multilabel input, it computes precision for each label then
- returns average of them weighted by support of labels (number of actual positive
- samples in each label).
+ ``False``: default option. For multiclass and multilabel
+ inputs, the per-class or per-label metric is returned. Calling `mean()` on the
+ metric instance gives the `macro` setting (an unweighted average across
+ classes or labels).
+ ``'micro'``: for multilabel input, every label of each sample is treated as
+ a sample in its own right, then precision is computed. For binary and multiclass
+ inputs, this is equivalent to `Accuracy`, so use that metric instead.
+ ``'samples'``: for multilabel input, precision is first computed
+ on a per-sample basis and then averaged across samples.
+ Incompatible with binary and multiclass inputs.
+ ``'weighted'``: for binary and multiclass input, it computes the metric for each class and
+ returns their average weighted by class support (the number of actual samples
+ in each class). For multilabel input, it computes precision for each label and
+ returns their average weighted by label support (the number of actual positive
+ samples in each label).
is_multilabel: flag to use in multilabel case. By default, value is False.
device: specifies which device updates are accumulated on. Setting the metric's
device to be the same as your ``update`` arguments ensures the ``update`` method is non-blocking. By
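The averaging options documented in this diff can be illustrated with a small hand-rolled sketch for a multiclass toy example. This is plain Python, not ignite's implementation, and `per_class_precision` is a hypothetical helper introduced here for illustration:

```python
from collections import Counter

def per_class_precision(y_true, y_pred):
    """Precision per class: among samples predicted as class c,
    the fraction whose true label is actually c (TP / (TP + FP))."""
    classes = sorted(set(y_true) | set(y_pred))
    result = {}
    for c in classes:
        # true labels of all samples the model predicted as class c
        predicted_c = [t for t, p in zip(y_true, y_pred) if p == c]
        result[c] = (sum(1 for t in predicted_c if t == c) / len(predicted_c)
                     if predicted_c else 0.0)
    return result

y_true = [0, 0, 1, 1, 2, 2]
y_pred = [0, 1, 1, 1, 2, 0]

# average=False analogue: one precision value per class
per_cls = per_class_precision(y_true, y_pred)

# "macro" analogue (what calling mean() yields): unweighted mean over classes
macro = sum(per_cls.values()) / len(per_cls)

# "weighted" analogue: mean over classes weighted by class support
support = Counter(y_true)
weighted = sum(per_cls[c] * support[c] for c in per_cls) / len(y_true)

# "micro" for multiclass input reduces to plain accuracy,
# which is why the docstring points to Accuracy instead
micro = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```

With this toy data the classes have equal support, so `weighted` coincides with `macro`; with imbalanced classes the two diverge, which is the point of the ``'weighted'`` option.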