Fix doc markdown
Fixed documentation markdown remarks for
* CalibratedBinaryClassificationMetrics.LogLossReduction
* MulticlassClassificationMetrics.LogLoss
* MulticlassClassificationMetrics.LogLossReduction

Signed-off-by: Robin Windey <ro.windey@gmail.com>
R0Wi committed Mar 27, 2021
1 parent b02b6e1 commit f9cb9d6
Showing 2 changed files with 3 additions and 3 deletions.
@@ -37,7 +37,7 @@ public sealed class CalibratedBinaryClassificationMetrics : BinaryClassification
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss reduction is scaled relative to a classifier that predicts the prior for every example:
-/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}
+/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}$
 /// This metric can be interpreted as the advantage of the classifier over a random prediction.
 /// For example, if the RIG equals 0.2, it can be interpreted as "the probability of a correct prediction is
 /// 20% better than random guessing".
@@ -24,7 +24,7 @@ public sealed class MulticlassClassificationMetrics
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss metric is computed as follows:
-/// $LogLoss = - \frac{1}{m} \sum_{i = 1}^m log(p_i),
+/// $LogLoss = - \frac{1}{m} \sum_{i = 1}^m log(p_i)$,
 /// where $m$ is the number of instances in the test set and
 /// $p_i$ is the probability returned by the classifier
 /// of the instance belonging to the true class.
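The corrected formula can be sanity-checked with a small standalone C# sketch. The class and variable names below are illustrative only, and the probabilities are made up; this is not ML.NET's implementation of the metric:

using System;
using System.Linq;

class LogLossCheck
{
    static void Main()
    {
        // p_i: probability the classifier assigned to the true class of each
        // of the m test-set instances (values made up for illustration).
        double[] p = { 0.9, 0.8, 0.6, 0.95 };

        // LogLoss = -(1/m) * sum_{i=1}^{m} log(p_i), per the corrected remark.
        double logLoss = -p.Average(x => Math.Log(x));

        Console.WriteLine($"LogLoss = {logLoss:F4}");
    }
}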
@@ -41,7 +41,7 @@ public sealed class MulticlassClassificationMetrics
 /// <remarks>
 /// <format type="text/markdown"><![CDATA[
 /// The log-loss reduction is scaled relative to a classifier that predicts the prior for every example:
-/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}
+/// $LogLossReduction = \frac{LogLoss(prior) - LogLoss(classifier)}{LogLoss(prior)}$
 /// This metric can be interpreted as the advantage of the classifier over a random prediction.
 /// For example, if the RIG equals 0.2, it can be interpreted as "the probability of a correct prediction is
 /// 20% better than random guessing".
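The LogLossReduction remark, fixed identically in both files, can be checked the same way. Again this is a hedged sketch with made-up log-loss values and hypothetical names, not the library's API:

using System;

class LogLossReductionCheck
{
    static void Main()
    {
        // Made-up log-losses: a baseline that always predicts the prior,
        // and the classifier under evaluation.
        double logLossPrior = 0.5;
        double logLossClassifier = 0.4;

        // LogLossReduction = (LogLoss(prior) - LogLoss(classifier)) / LogLoss(prior)
        double reduction = (logLossPrior - logLossClassifier) / logLossPrior;

        // Prints 0.2, i.e. the classifier is 20% better than always
        // predicting the prior (the RIG interpretation quoted in the remark).
        Console.WriteLine($"LogLossReduction = {reduction}");
    }
}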
