# Rageval

Evaluation tools for Retrieval-augmented Generation (RAG) methods.


Rageval is a tool that helps you evaluate RAG systems. The evaluation consists of six sub-tasks: query rewriting, document ranking, information compression, evidence verification, answer generation, and result validation.
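As an illustration of how these sub-tasks compose into one evaluation pass, here is a hypothetical sketch of the pipeline; the stage names and helper functions below are illustrative only and are not part of the rageval API:

```python
# Hypothetical sketch of the six-stage evaluation pipeline described above.
# The stage names and helpers are illustrative, not rageval API.
RAG_SUBTASKS = [
    "query_rewriting",         # reformulate the user query for retrieval
    "document_ranking",        # order retrieved documents by relevance
    "information_compression", # condense documents into usable context
    "evidence_verification",   # check that the context supports the claims
    "answer_generation",       # produce the final answer from the context
    "result_validation",       # score the answer against references
]

def evaluate_stage(stage: str, sample: dict) -> float:
    """Placeholder scorer; a real evaluator would dispatch to a
    stage-specific metric (e.g. a ranking metric for document_ranking)."""
    return 0.0

def evaluate_pipeline(sample: dict) -> dict:
    """Collect one score per sub-task for a single RAG sample."""
    return {stage: evaluate_stage(stage, sample) for stage in RAG_SUBTASKS}

print(evaluate_pipeline({"question": "...", "answer": "..."}))
```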

## Installation

```bash
git clone https://github.com/gomate-community/rageval.git
cd rageval
python setup.py install
```
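If you prefer pip, running `pip install -e .` from the repository root should also work, assuming a standard setuptools layout (not confirmed by this README).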

## Usage

```python
import rageval as rl

# Load a benchmark dataset and pick a metric.
test_set = rl.datasets.load_data('ALCE', task='')
metric = rl.metrics.ContextRecall()

# ContextRecall is LLM-judged, so attach a model before scoring.
model = rl.models.OpenAILLM()
metric.init_model(model)

results = metric._score_batch(test_set)
```
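The `OpenAILLM` judge presumably needs OpenAI credentials. A minimal sketch, assuming it reads the standard `OPENAI_API_KEY` environment variable (this README does not confirm the exact mechanism):

```python
import os

# Assumption: rl.models.OpenAILLM picks up the standard OpenAI key from
# the environment; check the model class for its exact configuration.
os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with a real key

results = metric._score_batch(test_set)
print(results)  # per-sample context recall scores (exact shape may vary)
```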

## Contribution

Please make sure to read the Contributing Guide before creating a pull request.

## About

This project is currently in its preliminary stage.