Stars
Utilities intended for use with Llama models.
CLiB Chinese LLM capability leaderboard (continuously updated): currently covers 195 large models, spanning commercial models such as ChatGPT, GPT-4o, o3-mini, Google Gemini, Claude 3.5, Zhipu GLM-Zero, ERNIE Bot (Wenxin Yiyan), qwen-max, Baichuan, iFLYTEK Spark, SenseTime SenseChat, and MiniMax, as well as DeepSeek-R1, deepseek-v3, qwen2.5, llama3.3, phi-4, glm4, 书生int…
Tools for merging pretrained large language models.
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
Awesome LLM compression research papers and tools.
Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
Reference implementation for DPO (Direct Preference Optimization)
An implementation of model parallel autoregressive transformers on GPUs, based on the Megatron and DeepSpeed libraries
Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native.
[NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward
Awesome list for LLM quantization
Benchmarking LLMs with Challenging Tasks from Real Users
🦜🔗 Build context-aware reasoning applications
🐫 CAMEL: Finding the Scaling Law of Agents. The first and the best multi-agent framework. https://www.camel-ai.org
Instruction Tuning with GPT-4
LLMs built upon Evol-Instruct: WizardLM, WizardCoder, WizardMath
A collection of awesome-prompt-datasets and awesome-instruction-dataset resources for training ChatLLMs such as ChatGPT; gathers a wide variety of instruction datasets used to train ChatLLM models.
A collection of open-source datasets for training instruction-following LLMs (ChatGPT, LLaMA, Alpaca)
Papers and Datasets on Instruction Tuning and Following. ✨✨✨
An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
[ICLR 2024 Spotlight] FLASK: Fine-grained Language Model Evaluation based on Alignment Skill Sets
A list of papers, docs, and code about model quantization. This repo aims to provide information for model quantization research and is continuously being improved. PRs adding relevant works are welcome (p…
Welcome to the Llama Cookbook! This is your go-to guide for building with Llama: getting started with inference, fine-tuning, and RAG. We also show you how to solve end-to-end problems using Llama mode…