
Commit

merge to main
liyin2015 committed Jul 2, 2024
2 parents 7ef18fb + 3faf660 commit a82f500
Showing 7 changed files with 859 additions and 33 deletions.
46 changes: 27 additions & 19 deletions docs/source/developer_notes/evaluation.rst
@@ -73,31 +73,39 @@ If you are interested in computing metrics such as accuracy, F1-score, ROUGE, BE
If you are particularly interested in evaluating RAG (Retrieval-Augmented Generation) pipelines, we have several metrics available in LightRAG to assess both the quality of the retrieved context and the quality of the final generated answer.

- :class:`RetrieverEvaluator <eval.evaluators.RetrieverEvaluator>`: This evaluator is used to evaluate the performance of the retriever component of the RAG pipeline. It has metric functions to compute the recall and context relevance of the retriever.
- :class:`AnswerMacthEvaluator <eval.evaluators.AnswerMacthEvaluator>`: This evaluator is used to evaluate the performance of the generator component of the RAG pipeline. It has metric functions to compute the exact match and fuzzy match accuracy of the generated answer.
- :class:`LLMasJudge <eval.evaluators.LLMasJudge>`: This evaluator uses an LLM to get the judgement of the predicted answer for a list of questions. The task description and the judgement query of the LLM judge can be customized. It has a metric function to compute the judgement score, which is the number of generated answers that are judged as correct by the LLM divided by the total number of generated answers.
- :class:`RetrieverRecall <eval.retriever_recall>`: This is used to evaluate the recall of the retriever component of the RAG pipeline.
- :class:`RetrieverRelevance <eval.retriever_relevance>`: This is used to evaluate the relevance of the retrieved context to the query.
- :class:`AnswerMatchAcc <eval.answer_match_acc>`: This calculates the exact match accuracy or fuzzy match accuracy of the generated answers by comparing them to the ground truth answers.
- :class:`LLMasJudge <eval.llm_as_judge>`: This uses an LLM to get the judgement of the generated answer for a list of questions. The task description and the judgement query of the LLM judge can be customized. It computes the judgement score, which is the number of generated answers that are judged as correct by the LLM divided by the total number of generated answers.

For example, you can use the following code snippet to compute the recall and relevance of the retriever component of the RAG pipeline for a list of queries.

.. code-block:: python
:linenos:
from eval.evaluators import RetrieverEvaluator
retrieved_context = "Apple is founded before Google." # Retrieved context
gt_context = ["Apple is founded in 1976.",
"Google is founded in 1998.",
"Apple is founded before Google."] # Ground truth context
retriever_evaluator = RetrieverEvaluator() # Initialize the RetrieverEvaluator
recall = retriever_evaluator.compute_recall_single_query(
retrieved_context, gt_context
) # Compute the recall of the retriever
relevance = retriever_evaluator.compute_context_relevance_single_query(
retrieved_context, gt_context
) # Compute the relevance of the retriever
print(f"Recall: {recall}, Relevance: {relevance}")
# Recall: 0.3333333333333333, Relevance: 1.0
For more detailed instructions on how to use these evaluators to evaluate RAG pipelines, you can refer to the tutorial on :doc:`Evaluating a RAG Pipeline <../tutorials/eval_a_rag>`, where we provide a step-by-step guide on how to use these evaluators to evaluate a RAG pipeline on the HotpotQA dataset.
from lightrag.eval import RetrieverRecall, RetrieverRelevance
retrieved_contexts = [
"Apple is founded before Google.",
"Feburary has 28 days in common years. Feburary has 29 days in leap years. Feburary is the second month of the year.",
]
gt_contexts = [
[
"Apple is founded in 1976.",
"Google is founded in 1998.",
"Apple is founded before Google.",
],
["Feburary has 28 days in common years", "Feburary has 29 days in leap years"],
]
retriever_recall = RetrieverRecall()
avg_recall, recall_list = retriever_recall.compute(retrieved_contexts, gt_contexts) # Compute the recall of the retriever
print(f"Recall: {avg_recall}, Recall List: {recall_list}")
# Recall: 0.6666666666666666, Recall List: [0.3333333333333333, 1.0]
retriever_relevance = RetrieverRelevance()
avg_relevance, relevance_list = retriever_relevance.compute(retrieved_contexts, gt_contexts) # Compute the relevance of the retriever
print(f"Relevance: {avg_relevance}, Relevance List: {relevance_list}")
# Relevance: 0.803030303030303, Relevance List: [1.0, 0.6060606060606061]
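The answer-level metrics follow the same ``compute`` pattern, returning an average score plus a per-item list. The snippet below is a minimal sketch: the answers mirror the unit tests added in this commit, and the import assumes :class:`AnswerMatchAcc <eval.answer_match_acc>` is exported from ``lightrag.eval`` in the same way as the retriever metrics.

.. code-block:: python

    from lightrag.eval import AnswerMatchAcc  # assumed export, alongside the retriever metrics

    pred_answers = ["positive", "negative", "this is neutral"]
    gt_answers = ["positive", "negative", "neutral"]
    answer_match_acc = AnswerMatchAcc(type="exact_match")  # or type="fuzzy_match"
    avg_acc, acc_list = answer_match_acc.compute(pred_answers, gt_answers)
    print(f"Accuracy: {avg_acc}, Accuracy List: {acc_list}")
    # Accuracy: 0.6666666666666666, Accuracy List: [1.0, 1.0, 0.0]

:class:`LLMasJudge <eval.llm_as_judge>` works the same way, except that it also takes the questions and a judgement query, and it calls an LLM behind the scenes, so it needs a configured model client and API key. The questions and answers below are illustrative only; the judgement query matches the example in ``llm_as_judge.py``.

.. code-block:: python

    from lightrag.eval import LLMasJudge  # assumed export; requires an LLM client/API key

    # Illustrative data only.
    questions = ["Is Beijing in China?", "Is Apple founded before Google?", "Is the earth flat?"]
    gt_answers = ["Yes", "Yes", "No"]
    pred_answers = ["Yes", "Yes, Apple is founded before Google", "Yes"]
    judgement_query = "For the question, does the predicted answer contain the ground truth answer?"
    llm_judge = LLMasJudge()
    avg_judgement, judgement_list = llm_judge.compute(
        questions, gt_answers, pred_answers, judgement_query
    )
    print(avg_judgement, judgement_list)
    # e.g. 0.6666666666666666 [True, True, False] if the judge accepts two of the three answers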
For more detailed instructions on how to build and evaluate RAG pipelines, you can refer to the use case on :doc:`Evaluating a RAG Pipeline <../tutorials/eval_a_rag>`.

If you intend to use metrics that are not available in the LightRAG library, you can also implement your own custom metric functions or use other libraries such as `RAGAS <https://docs.ragas.io/en/stable/getstarted/index.html>`_ to compute the desired metrics for evaluating RAG pipelines.
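For instance, a custom metric can follow the same convention as the built-in evaluators and return an average score together with a per-query list. The function below is a hypothetical sketch (``compute_context_length_ratio`` is not part of LightRAG), meant only to illustrate the shape of such a metric.

.. code-block:: python

    from typing import List, Tuple

    def compute_context_length_ratio(
        retrieved_contexts: List[str], gt_contexts: List[List[str]]
    ) -> Tuple[float, List[float]]:
        # Hypothetical custom metric: how much of the total ground-truth context
        # length each retrieved context covers, capped at 1.0 per query.
        scores: List[float] = []
        for retrieved, gt_list in zip(retrieved_contexts, gt_contexts):
            gt_length = sum(len(gt) for gt in gt_list)
            score = min(len(retrieved) / gt_length, 1.0) if gt_length > 0 else 0.0
            scores.append(score)
        avg_score = sum(scores) / len(scores) if scores else 0.0
        return avg_score, scores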

22 changes: 12 additions & 10 deletions lightrag/lightrag/eval/llm_as_judge.py
@@ -42,7 +42,7 @@
class DefaultLLMJudge(Component):
__doc__ = r"""Demonstrate how to use an LLM/Generator to output True or False for a judgement query.
You can use any any of your template to adapt to more tasks and sometimes you can directly ask LLM to output a score in range [0, 1] instead of only True or False.
You can use any of your templates to adapt to more tasks, and sometimes you can directly ask the LLM to output a score in the range [0, 1] instead of only True or False.
A call to the LLM judge is equivalent to the _compute_single_item method.
@@ -82,8 +82,8 @@ def call(
Args:
question (str): Question string.
pred_answer (str): Predicted answer string.
gt_answer (str): Ground truth answer string.
pred_answer (str): Predicted answer string.
judgement_query (str): Judgement query string.
Returns:
@@ -126,7 +126,7 @@ class LLMasJudge:
>>> judgement_query = "For the question, does the predicted answer contain the ground truth answer?"
>>> llm_judge = LLMasJudge()
>>> avg_judgement, judgement_list = llm_judge.compute(
questions, pred_answers, gt_answers, judgement_query
questions, gt_answers, pred_answers, judgement_query
)
>>> avg_judgement
2 / 3
@@ -143,28 +143,30 @@ def __init__(
def compute(
self,
questions: List[str],
pred_answers: List[str],
gt_answers: List[str],
pred_answers: List[str],
judgement_query: str,
) -> List[bool]:
r"""
Get the judgement of the predicted answer for a list of questions.
Args:
questions (List[str]): List of question strings.
pred_answers (List[str]): List of predicted answer strings.
gt_answers (List[str]): List of ground truth answer strings.
pred_answers (List[str]): List of predicted answer strings.
judgement_query (str): Judgement query string.
Returns:
List[bool]: Judgement results.
tuple:
- float: Average judgement score.
- List[bool]: Judgement results for each query.
"""
judgement_list = []
for question, pred_answer, gt_answer in zip(
questions, pred_answers, gt_answers
for question, gt_answer, pred_answer in zip(
questions, gt_answers, pred_answers
):
judgement = self.llm_evaluator(
question, pred_answer, gt_answer, judgement_query
question, gt_answer, pred_answer, judgement_query
)
judgement_list.append(judgement)

@@ -185,7 +187,7 @@ def compute(
)
llm_judge = LLMasJudge()
avg_judgement, judgement_list = llm_judge.compute(
questions, pred_answers, gt_answers, judgement_query
questions, gt_answers, pred_answers, judgement_query
)
print(avg_judgement)
print(judgement_list)
8 changes: 4 additions & 4 deletions lightrag/tests/test_evaluators.py
@@ -13,9 +13,9 @@ def test_answer_match_acc():
pred_answers = ["positive", "negative", "this is neutral"]
gt_answers = ["positive", "negative", "neutral"]
answer_match_acc = AnswerMatchAcc(type="exact_match")
result, result_list = answer_match_acc.compute(pred_answers, gt_answers)
assert result == 2 / 3
assert result_list == [1.0, 1.0, 0.0]
avg_acc, acc_list = answer_match_acc.compute(pred_answers, gt_answers)
assert avg_acc == 2 / 3
assert acc_list == [1.0, 1.0, 0.0]
answer_match_acc = AnswerMatchAcc(type="fuzzy_match")
avg_acc, acc_list = answer_match_acc.compute(pred_answers, gt_answers)
assert avg_acc == 1.0
@@ -79,7 +79,7 @@ def test_llm_as_judge():
)
llm_judge = LLMasJudge()
avg_judgement, judgement_list = llm_judge.compute(
questions, pred_answers, gt_answers, judgement_query
questions, gt_answers, pred_answers, judgement_query
)
assert avg_judgement == 2 / 3
assert judgement_list == [True, True, False]