🔥 A list of tools, frameworks, and resources for building AI web agents
An extensible benchmark for evaluating large language models on planning
A comprehensive set of LLM benchmark scores and provider prices.
[NeurIPS 2025] BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks and Defenses on Large Language Models
What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks
A benchmark that evaluates LLMs on 759 NYT Connections puzzles extended with extra trick words.
How good are LLMs at chemistry?
Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)
Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a small set of examples and anti-examples, then detect which item truly fits that theme among a collection of misleading candidates.
Language Model for Mainframe Modernization
Develop reliable AI apps
[NeurIPS'25] MLLM-CompBench evaluates the comparative reasoning of MLLMs with 40K image pairs and questions across 8 dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. CompBench covers diverse visual domains, including animals, fashion, sports, and scenes.
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)
Training and Benchmarking LLMs for Code Preference.
The data and implementation for the experiments in the paper "Flows: Building Blocks of Reasoning and Collaborating AI".
Evaluating Deep Multimodal Reasoning in Vision-Centric Agentic Tasks
Restore safety in fine-tuned language models through task arithmetic
A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs.
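Most of the benchmarks above reduce to the same loop: feed each task to the model under test, score its output against a reference, and aggregate. The sketch below is a minimal, generic illustration of that pattern only; every name in it (Task, run_model, exact_match) is a hypothetical placeholder, not the API of any project listed here.

```python
# Generic sketch of a benchmark harness: a task set, a model under test,
# and a scorer. All names are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Task:
    prompt: str      # input shown to the model
    reference: str   # expected answer used for scoring

def exact_match(prediction: str, reference: str) -> float:
    """Simplest possible scorer: 1.0 on an exact (case-insensitive) match."""
    return float(prediction.strip().lower() == reference.strip().lower())

def evaluate(tasks: List[Task],
             run_model: Callable[[str], str],
             score: Callable[[str, str], float] = exact_match) -> float:
    """Run every task through the model and return the mean score."""
    total = 0.0
    for task in tasks:
        prediction = run_model(task.prompt)
        total += score(prediction, task.reference)
    return total / len(tasks) if tasks else 0.0

if __name__ == "__main__":
    # Toy example with a stub "model" so the sketch runs end to end.
    demo_tasks = [Task(prompt="2 + 2 =", reference="4")]
    print(evaluate(demo_tasks, run_model=lambda prompt: "4"))  # -> 1.0
```

Real benchmarks differ mainly in what Task holds (images, puzzle grids, code diffs) and in how score is defined (partial credit, pairwise preference, LLM-as-judge), but the evaluation loop itself rarely changes.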