
Rageval

Evaluation tools for Retrieval-augmented Generation (RAG) methods.


Rageval is a tool that helps you evaluate RAG systems. The evaluation consists of six sub-tasks: query rewriting, document ranking, information compression, evidence verification, answer generation, and result validation.

Definition of tasks and metrics

The generate task is to answer the question based on the contexts provided by the retrieval modules in RAG. Typically, the contexts are extracted or generated text snippets from the compressor, or relevant documents from the re-ranker. We divide the metrics used in the generate task into two categories: answer correctness and answer groundedness.

(1) Answer Correctness: these metrics evaluate correctness by comparing the generated answer with the ground-truth answer. Commonly used metrics of this kind include String EM, ROUGE-L, and Disambig F1, which appear in the ASQA benchmark below.
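
For intuition, here is a minimal sketch of the String EM idea: the fraction of annotated short-answer groups that appear verbatim in the generated answer. This is an illustration only, not rageval's implementation; the function name `string_em` and the toy data are made up.

```python
from typing import List

def string_em(prediction: str, gt_short_answers: List[List[str]]) -> float:
    """Fraction of ground-truth short-answer groups for which at least one
    acceptable surface form appears verbatim in the prediction."""
    pred = prediction.lower()
    hits = [any(ans.lower() in pred for ans in group) for group in gt_short_answers]
    return sum(hits) / len(hits) if hits else 0.0

# One of the two annotated answer groups is covered by the prediction.
print(string_em(
    "The capital of Australia is Canberra.",
    [["Canberra"], ["Sydney", "Melbourne"]],
))  # 0.5
```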

(2) Answer Groundedness: these metrics evaluate groundedness (also known as factual consistency) by comparing the generated answer with the provided contexts. Commonly used metrics of this kind include citation recall and citation precision, reported in the ALCE benchmark below.
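
As a rough illustration of the groundedness idea, the sketch below checks whether each answer sentence is mostly covered, token-wise, by at least one retrieved context. Real groundedness metrics typically rely on an NLI model or an LLM judge rather than token overlap; this function and its name `naive_groundedness` are hypothetical and not part of rageval.

```python
def naive_groundedness(answer_sentences, contexts, threshold=0.6):
    """Fraction of answer sentences whose tokens are mostly covered by at
    least one retrieved context (a crude proxy for factual consistency)."""
    def overlap(sentence, context):
        s, c = set(sentence.lower().split()), set(context.lower().split())
        return len(s & c) / max(len(s), 1)

    supported = [
        any(overlap(sentence, context) >= threshold for context in contexts)
        for sentence in answer_sentences
    ]
    return sum(supported) / len(supported) if supported else 0.0
```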

The rewrite task is to reformulate the user question into a set of queries that are better suited to the search module in RAG.
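
Purely as an illustration of the task's input and output (this example is made up and not produced by rageval), a single question may be reformulated into several focused queries:

```python
# Hypothetical example of the rewrite task's input and output.
question = "Who won the most recent FIFA World Cup, and where was it held?"
rewritten_queries = [
    "most recent FIFA World Cup winner",
    "most recent FIFA World Cup host country",
]
```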

The search task is to retrieve relevant documents from the knowledge base.

(1) Context Adequacy: these metrics evaluate adequacy by comparing the retrieved documents with the ground-truth contexts.
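
For example, a simple adequacy-style measure is recall@k over document identifiers against the ground-truth contexts. The sketch below is illustrative only; `recall_at_k` is a hypothetical helper, not a rageval metric.

```python
def recall_at_k(retrieved_ids, groundtruth_ids, k=5):
    """Fraction of ground-truth context documents that appear in the
    top-k retrieved list."""
    top_k = set(retrieved_ids[:k])
    relevant = set(groundtruth_ids)
    return len(top_k & relevant) / len(relevant) if relevant else 0.0

# Two of the three ground-truth passages are retrieved in the top 5.
print(recall_at_k(["d3", "d7", "d1", "d9", "d4"], ["d1", "d2", "d3"]))  # ~0.667
```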

(2) Context Relevance: these metrics evaluate relevance by comparing the retrieved documents with the ground-truth answers.

Benchmark


ASQA is a question-answering dataset that contains factoid questions and long-form answers. The benchmark below evaluates the correctness of answers on this dataset.

| Model | Method | String EM | Rouge L | Disambig F1 | D-R Score |
| --- | --- | --- | --- | --- | --- |
| gpt-3.5-turbo-instruct | no-retrieval | 33.8 | 30.2 | 30.7 | 30.5 |
| mistral-7b | no-retrieval | 20.6 | 31.1 | 26.6 | 28.7 |
| llama2-7b-chat | no-retrieval | 21.7 | 30.7 | 28.0 | 29.3 |
| solar-10.7b-instruct | no-retrieval | 23.0 | 24.9 | 28.1 | 26.5 |

ALCE is a benchmark for Automatic LLMs' Citation Evaluation. ALCE contains three datasets: ASQA, QAMPARI, and ELI5.

| Dataset | Model | Retriever | Prompt | MAUVE | EM Recall | Claim Recall | Citation Recall | Citation Precision |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ASQA | llama2-7b-chat | GTR | vanilla(5-psg) | - | 33.3 | - | 55.9 | 80.0 |
| ASQA | llama2-7b-chat | GTR | summary(5-psg) | - | - | - | - | - |
| ASQA | llama2-7b-chat | GTR | summary(10-psg) | - | - | - | - | - |
| ASQA | llama2-7b-chat | GTR | snippet(5-psg) | - | - | - | - | - |
| ASQA | llama2-7b-chat | GTR | snippet(10-psg) | - | - | - | - | - |
| ASQA | llama2-7b-chat | DPR | snippet(10-psg) | - | - | - | - | - |
| ASQA | llama2-7b-chat | Oracle | vanilla(5-psg) | - | - | - | - | - |
| ELI5 | llama2-7b-chat | BM25 | vanilla(5-psg) | - | - | 11.5 | 26.6 | 74.5 |
| ELI5 | llama2-7b-chat | BM25 | summary(5-psg) | - | - | - | - | - |
| ELI5 | llama2-7b-chat | BM25 | summary(10-psg) | - | - | - | - | - |
| ELI5 | llama2-7b-chat | BM25 | snippet(5-psg) | - | - | - | - | - |
| ELI5 | llama2-7b-chat | BM25 | snippet(10-psg) | - | - | - | - | - |
| ELI5 | llama2-7b-chat | Oracle | vanilla(5-psg) | - | - | - | - | - |

Installation

git clone https://github.com/gomate-community/rageval.git
cd rageval
python setup.py install

Usage

import rageval as rl

# Load a benchmark dataset and choose a metric.
test_set = rl.datasets.load_data('ALCE', task='')
metric = rl.metrics.ContextRecall()

# The metric relies on an LLM judge, so attach a model before scoring.
model = rl.models.OpenAILLM()
metric.init_model(model)

# Score the whole test set.
results = metric._score_batch(test_set)

Contribution

Please make sure to read the Contributing Guide before creating a pull request.

About

This project is currently in a preliminary stage.
