Fix broken link to GEval paper in metrics-conversational-g-eval.mdx
j-mesnil authored Feb 3, 2025
1 parent 44adb90 commit 4c74233
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion docs/docs/metrics-conversational-g-eval.mdx
@@ -74,5 +74,5 @@ The `ConversationalGEval` is an adapted version of [`GEval`](/docs/metrics-llm-e
Unlike regular `GEval` though, the `ConversationalGEval` takes the entire conversation history into account during evaluation.

:::tip
- Similar to the original [G-Eval paper](<(https://arxiv.org/abs/2303.16634)>), the `ConversationalGEval` metric uses the probabilities of the LLM output tokens to normalize the score by calculating a weighted summation. This step was introduced in the paper to minimize bias in LLM scoring, and is automatically handled by `deepeval` (unless you're using a custom LLM).
+ Similar to the original [G-Eval paper](https://arxiv.org/abs/2303.16634), the `ConversationalGEval` metric uses the probabilities of the LLM output tokens to normalize the score by calculating a weighted summation. This step was introduced in the paper to minimize bias in LLM scoring, and is automatically handled by `deepeval` (unless you're using a custom LLM).
:::
