Evaluation tools for Retrieval-augmented Generation (RAG) methods.
Rageval is a tool that helps you evaluate RAG systems. The evaluation consists of six sub-tasks: query rewriting, document ranking, information compression, evidence verification, answer generation, and result validation.
The generate task is to answer the question based on the contexts provided by the retrieval modules in RAG. Typically, the contexts are extracted or generated text snippets from the compressor, or relevant documents from the re-ranker. Here, we divide the metrics used in the generate task into two categories, namely answer correctness and answer groundedness.
(1) Answer Correctness: this category of metrics evaluates correctness by comparing the generated answer with the ground-truth answer (see the sketch after the list below). Here are some commonly used metrics:
- Answer NLI Correctness: also known as claim recall in the paper (Tianyu et al.).
- Answer EM Correctness: also known as Exact Match, as used in ASQA (Ivan Stelmakh et al.).
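For intuition, the snippet below sketches the exact-match idea behind Answer EM Correctness: each gold short answer is checked for (case-insensitive) presence in the generated long-form answer. This is an illustrative sketch only, not Rageval's implementation, which may normalize text differently.

```python
def answer_em_correctness(generated_answer: str, short_answers: list[str]) -> float:
    """Fraction of gold short answers that appear (case-insensitively)
    in the generated long-form answer. Illustrative sketch only."""
    answer = generated_answer.lower()
    hits = [short.lower() in answer for short in short_answers]
    return sum(hits) / len(hits) if hits else 0.0

# Toy usage:
score = answer_em_correctness(
    "The Eiffel Tower was completed in 1889 and stands in Paris.",
    ["1889", "Paris"],
)
print(score)  # 1.0
```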
(2) Answer Groundedness: this category of metrics evaluates groundedness (also known as factual consistency) by comparing the generated answer with the provided contexts (see the sketch after the list below). Here are some commonly used metrics:
- Answer Citation Precision (`answer_citation_precision`)
- Answer Citation Recall (`answer_citation_recall`)
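The sketch below illustrates one common formulation of these citation metrics. The `entails` judge is a placeholder for the NLI or LLM model that would decide entailment in practice; the exact definitions used in Rageval may differ.

```python
from typing import Callable

# Placeholder judge: in practice an NLI model or an LLM decides whether
# the premise (one or more context passages) entails the claim.
EntailsFn = Callable[[str, str], bool]

def citation_scores(
    statements: list[str],        # sentences of the generated answer
    citations: list[list[str]],   # cited context passages per sentence
    entails: EntailsFn,           # hypothetical entailment judge
) -> dict:
    """Toy formulation: recall = fraction of statements supported by the union
    of their cited passages; precision = fraction of individual citations that
    by themselves support their statement."""
    supported = 0
    useful_citations, total_citations = 0, 0
    for stmt, cited in zip(statements, citations):
        if cited and entails(" ".join(cited), stmt):
            supported += 1
        for passage in cited:
            total_citations += 1
            if entails(passage, stmt):
                useful_citations += 1
    recall = supported / len(statements) if statements else 0.0
    precision = useful_citations / total_citations if total_citations else 0.0
    return {"answer_citation_recall": recall, "answer_citation_precision": precision}
```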
The rewrite task is to reformulate the user's question into a set of queries, making them more friendly to the search module in RAG.
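As a toy illustration of the rewrite task's input and output shape (not Rageval's API), a rewriter might turn one user question into several search-friendly queries:

```python
# Hypothetical example of the rewrite task: one question, several search queries.
user_question = "Who designed the Eiffel Tower and when was it completed?"
rewritten_queries = [
    "Eiffel Tower designer",
    "Eiffel Tower completion year",
]
```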
```bash
git clone https://github.com/gomate-community/rageval.git
cd rageval
python setup.py install
```
```python
import rageval as rl

# Load the ALCE benchmark as the test set.
test_set = rl.datasets.load_data('ALCE', task='')

# Evaluate context recall, using an OpenAI model as the judge.
metric = rl.metrics.ContextRecall()
model = rl.models.OpenAILLM()
metric.init_model(model)
results = metric._score_batch(test_set)
```
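Note that `OpenAILLM` presumably calls the OpenAI API, so an API key is likely required. Providing it through the standard environment variable (an assumption here; check the project documentation) would look like:

```python
import os

# Assumption: OpenAILLM reads the standard OPENAI_API_KEY environment variable.
os.environ["OPENAI_API_KEY"] = "your-api-key"
```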
Please make sure to read the Contributing Guide before creating a pull request.
This project is currently in its preliminary stage.