BenchmarkQED

👉 Microsoft Research Blog Post
👉 BenchmarkQED Docs

Overview

flowchart LR
    AutoQ["<span style='font-size:1.5em; color:black'><b>AutoQ</b></span><br>LLM synthesis of<br>local-global queries<br>for target datasets<br><br>Enables <i>repeatability</i>"] -- creates<br>standard queries<br>for evaluation --> AutoE["<span style='font-size:1.5em; color:black'><b>AutoE</b></span><br>LLM evaluation that<br>compares answers<br>or checks assertions<br><br>Enables <i>scalability</i>"]
    AutoE ~~~ AutoD["<span style='font-size:1.5em; color:black'><b>AutoD</b></span><br>LLM summarization<br>of datasets sampled<br>to a target structure<br><br>Enables <i>consistency</i>"]
    AutoD -- curates<br>standard datasets<br>for evaluation --> AutoE
    AutoD -- creates standard<br>dataset summaries<br>for query synthesis --> AutoQ
    style AutoQ fill:#a8d0ed,color:black,font-weight:normal
    style AutoE fill:#a8d0ed,color:black,font-weight:normal
    style AutoD fill:#a8d0ed,color:black,font-weight:normal
    linkStyle 0 stroke:#0077b6,stroke-width:2px
    linkStyle 2 stroke:#0077b6,stroke-width:2px
    linkStyle 3 stroke:#0077b6,stroke-width:2px

BenchmarkQED is a suite of tools designed for automated benchmarking of retrieval-augmented generation (RAG) systems. It provides components for query generation, evaluation, and dataset preparation to facilitate reproducible testing at scale.

  • AutoQ: Generates four classes of synthetic queries with variable data scope, ranging from local queries (answered using a small number of text regions) to global queries (requiring reasoning over large portions or the entirety of a dataset).
  • AutoE: Evaluates RAG answers by comparing them side by side on key metrics (relevance, comprehensiveness, diversity, and empowerment) using the LLM-as-a-Judge approach; a minimal sketch of this pattern follows the list. When ground truth is available, AutoE can also assess correctness, completeness, and other custom metrics.
  • AutoD: Provides data utilities for sampling and summarizing datasets, ensuring consistent inputs for query synthesis.
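For intuition, here is a minimal sketch of the pairwise LLM-as-a-Judge pattern that AutoE applies, written against the OpenAI Python client. This is not BenchmarkQED's actual API: the model name, prompt wording, and the judge_pair helper are illustrative assumptions.

# Minimal sketch of pairwise LLM-as-a-Judge evaluation in the style of AutoE.
# NOTE: illustrative only -- not BenchmarkQED's API. The prompt wording,
# model name, and helper below are assumptions for demonstration.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

METRICS = ["relevance", "comprehensiveness", "diversity", "empowerment"]

def judge_pair(query: str, answer_a: str, answer_b: str, metric: str) -> dict:
    """Ask an LLM judge which of two RAG answers wins on a single metric."""
    prompt = (
        f"Query: {query}\n\n"
        f"Answer A: {answer_a}\n\n"
        f"Answer B: {answer_b}\n\n"
        f"Which answer is better on {metric}? "
        'Reply as JSON: {"winner": "A" | "B" | "tie", "reason": "..."}'
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice; any capable judge model works
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)

# Example: compare two systems' answers to one query, metric by metric.
verdicts = {
    m: judge_pair(
        "What themes recur across the podcast?",
        "<answer from system A>",
        "<answer from system B>",
        m,
    )
    for m in METRICS
}

In practice, pairwise judges are usually run multiple times with the answer order swapped between trials to counteract position bias, with verdicts aggregated into per-metric win rates.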

In addition to the tools, we release two datasets to support the development and evaluation of RAG systems:

  • Podcast Transcripts: Transcripts of 70 episodes of the Behind the Tech podcast series. This is an updated version of the podcast transcript dataset used in the GraphRAG paper.
  • AP News: A collection of 1,397 health-related news articles from the Associated Press.

Getting Started

Instructions for getting started can be found here.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.

Privacy & Cookies

Microsoft Privacy Statement
