# llm-evaluation-framework

Here is 1 public repository matching this topic...

MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek), custom tasks in YAML, and HTML/CSV reports.

  • Updated Mar 29, 2025
  • Go
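
To give a rough sense of what a YAML-defined task for an evaluation framework like this could look like, here is a minimal sketch. The layout and field names (`task-config`, `tasks`, `name`, `prompt`, `expected-result`) are illustrative assumptions, not MindTrial's documented schema.

```yaml
# Hypothetical task file -- field names are illustrative assumptions,
# not MindTrial's actual configuration schema.
task-config:
  tasks:
    - name: capital-city
      prompt: |
        Answer with the capital city only.
        What is the capital of Australia?
      expected-result: Canberra
    - name: basic-arithmetic
      prompt: "What is 17 * 23? Reply with the number only."
      expected-result: "391"
```

Under such a setup, each configured provider/model would be run against every task and its answers compared with the expected results to produce the HTML/CSV reports mentioned above.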

## Improve this page

Add a description, image, and links to the llm-evaluation-framework topic page so that developers can more easily learn about it.


## Add this topic to your repo

To associate your repository with the llm-evaluation-framework topic, visit your repo's landing page and select "manage topics."
