What can Large Language Models do in chemistry? A comprehensive benchmark on eight tasks
BackdoorLLM: A Comprehensive Benchmark for Backdoor Attacks on Large Language Models
A comprehensive set of LLM benchmark scores and provider prices.
Python SDK for experimenting, testing, evaluating & monitoring LLM-powered applications - Parea AI (YC S23)
How good are LLMs at chemistry?
Language Model for Mainframe Modernization
CompBench evaluates the comparative reasoning of multimodal large language models (MLLMs) with 40K image pairs and questions across 8 dimensions of relative comparison: visual attribute, existence, state, emotion, temporality, spatiality, quantity, and quality. CompBench covers diverse visual domains, including animals, fashion, sports, and scenes.
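As a rough illustration of what a comparative-reasoning benchmark item could look like, here is a minimal Python sketch. The field names, the 8-dimension tags, and the ask() helper are assumptions for illustration, not CompBench's actual data schema or API.

```python
# Hypothetical sketch of a CompBench-style comparative item and scoring loop.
# Field names and the ask() callback are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ComparativeItem:
    image_a: str    # path to the first image of the pair
    image_b: str    # path to the second image
    question: str   # e.g. "Which scene contains more people?"
    dimension: str  # one of the 8 axes, e.g. "quantity" or "spatiality"
    answer: str     # ground-truth choice: "A" or "B"

def accuracy(items: list[ComparativeItem], ask) -> float:
    """Score a model: ask(item) should return "A" or "B"."""
    correct = sum(ask(item) == item.answer for item in items)
    return correct / len(items)
```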
The data and implementation for the experiments in the paper "Flows: Building Blocks of Reasoning and Collaborating AI".
Training and Benchmarking LLMs for Code Preference.
Develop reliable AI apps
Benchmark that evaluates LLMs using 436 NYT Connections puzzles
Restore safety in fine-tuned language models through task arithmetic
A minimalist benchmarking tool designed to test the routine-generation capabilities of LLMs.
Code and data for Koo et al.'s ACL 2024 paper "Benchmarking Cognitive Biases in Large Language Models as Evaluators"
Thematic Generalization Benchmark: measures how effectively various LLMs can infer a narrow or specific "theme" (category/rule) from a small set of examples and anti-examples, then detect which item truly fits that theme among a collection of misleading candidates.
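To make the task format concrete, here is a small Python sketch of how a thematic-generalization trial might be prompted and graded. The prompt layout and the exact-match grading rule are assumptions, not the benchmark's published specification.

```python
# Hypothetical sketch of one thematic-generalization trial.
# Prompt wording and grading rule are illustrative assumptions.
def build_prompt(examples, anti_examples, candidates):
    """Assemble a single-trial prompt from examples, anti-examples, and candidates."""
    lines = ["Infer the hidden theme from these items:"]
    lines += [f"  fits the theme: {e}" for e in examples]
    lines += [f"  does NOT fit: {a}" for a in anti_examples]
    lines.append("Which ONE candidate truly fits the theme?")
    lines += [f"  {i + 1}. {c}" for i, c in enumerate(candidates)]
    lines.append("Answer with the number only.")
    return "\n".join(lines)

def grade(model_output: str, correct_index: int) -> bool:
    """Exact-match grading on the 1-based candidate index."""
    return model_output.strip() == str(correct_index + 1)
```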
A framework for evaluating the effectiveness of chain-of-thought reasoning in language models.
Awesome Mixture of Experts (MoE): A Curated List of Mixture of Experts (MoE) and Mixture of Multimodal Experts (MoME)