🔥 Large Language Models (LLMs) have taken the NLP community, the AI community, and the whole world by storm. Here is a curated list of papers about large language models, especially those relating to ChatGPT. It also contains frameworks for LLM training, tools for deploying LLMs, courses and tutorials about LLMs, and all publicly available LLM checkpoints and APIs.
- [2023-07-01] Add some open-source models: Aquila, ChatGLM2, UltraLM
- [2023-07-01] Add some deploying tools: vLLM, Text Generation Inference
- [2023-07-01] Add some great posts about LLMs from Yao Fu, Lilian Weng, and Andrej Karpathy
- Add LLM data (Pretraining data / Instruction Tuning data / Chat data / RLHF data) ✨ Contributions wanted
Also check out the project that I am currently working on: nanoRWKV - a nanoGPT-style implementation of the RWKV language model (an RNN with GPT-level LLM performance).
If you're interested in the field of LLMs, you may find the above list of milestone papers helpful for exploring its history and state of the art. However, each direction of LLM research offers a unique set of insights and contributions, which are essential to understanding the field as a whole. For detailed lists of papers in the various subfields, please refer to the following (note that different subfields may overlap):
(❗ We would greatly appreciate and welcome your contributions to the following list. ❗)
- Analyze different LLMs in different fields with respect to different abilities
- Hardware and software acceleration for LLM training and inference
- Use LLMs to do some really cool stuff
- Augment LLMs in different aspects, including faithfulness, expressiveness, domain-specific knowledge, etc.
- Detect LLM-generated text from text written by humans
- Align LLMs with human preferences
- Chain of thought, a series of intermediate reasoning steps, significantly improves the ability of large language models to perform complex reasoning
- Large language models (LLMs) demonstrate an in-context learning (ICL) ability, that is, learning from a few examples provided in the context (a minimal prompting sketch follows this list)
- A Good Prompt is Worth 1,000 Words
- Finetune a language model on a collection of tasks described via instructions
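To make chain of thought and in-context learning concrete, here is a minimal few-shot CoT prompting sketch using the `openai` Python package (the v0.x `ChatCompletion` API; the model choice and the worked example are illustrative assumptions):

```python
# Few-shot chain-of-thought prompting: the in-context example demonstrates
# the intermediate reasoning steps we want the model to imitate.
import openai  # assumes openai v0.x and OPENAI_API_KEY set in the environment

cot_example = (
    "Q: A cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples does it have?\n"
    "A: It started with 23 apples and used 20, leaving 23 - 20 = 3. "
    "It bought 6 more, so 3 + 6 = 9. The answer is 9.\n\n"
)
question = ("Q: Roger has 5 tennis balls and buys 2 cans of 3 balls each. "
            "How many balls does he have now?\nA:")

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[{"role": "user", "content": cot_example + question}],
    temperature=0,
)
print(response["choices"][0]["message"]["content"])
```

Without the worked example the model is more likely to answer directly; with it, the reply typically spells out "5 + 2 * 3 = 11" before giving the final answer.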
There are three important steps for building a ChatGPT-like LLM (a minimal sketch of the second step follows this list):
- Pre-training
- Instruction Tuning
- Alignment
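As a rough illustration of the instruction-tuning step, here is a minimal sketch using Hugging Face `transformers` and `datasets`; the base checkpoint, the `tatsu-lab/alpaca` dataset, the prompt template, and all hyperparameters are illustrative assumptions, not a prescribed recipe:

```python
# Minimal instruction-tuning sketch: format (instruction, response) pairs
# into a single text and finetune a small causal LM on it.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/gpt-neo-125m"  # small base LM (assumption)
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

dataset = load_dataset("tatsu-lab/alpaca", split="train")  # assumption

def format_and_tokenize(example):
    # Simple Alpaca-style prompt template (assumption).
    text = (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}{tokenizer.eos_token}")
    return tokenizer(text, truncation=True, max_length=512)

tokenized = dataset.map(format_and_tokenize,
                        remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False makes the collator copy input_ids into labels (causal LM loss).
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

Pretraining uses the same causal-LM loss on raw text at vastly larger scale, while alignment (e.g. RLHF) further tunes the instructed model against a learned preference or reward signal.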
The following tables are organized so that all LLMs are compared apples to apples.
You may also find these leaderboards helpful:
- Open LLM Leaderboard - aims to track, rank and evaluate LLMs and chatbots as they are released.
- Chatbot Arena Leaderboard - a benchmark platform for large language models (LLMs) that features anonymous, randomized battles in a crowdsourced manner.
- AlpacaEval Leaderboard - An Automatic Evaluator for Instruction-following Language Models
Model | Size | Architecture | Access | Date | Origin | Model License |
---|---|---|---|---|---|---|
Switch Transformer | 1.6T | Decoder (MoE) | - | 2021-01 | Paper | - |
GLaM | 1.2T | Decoder (MoE) | - | 2021-12 | Paper | - |
PaLM | 540B | Decoder | - | 2022-04 | Paper | - |
MT-NLG | 530B | Decoder | - | 2022-01 | Paper | - |
J1-Jumbo | 178B | Decoder | api | 2021-08 | Paper | - |
OPT | 175B | Decoder | api/ckpt | 2022-05 | Paper | OPT-175B License Agreement |
BLOOM | 176B | Decoder | api/ckpt | 2022-11 | Paper | BigScience RAIL License v1.0 |
GPT-3 | 175B | Decoder | api | 2020-05 | Paper | - |
LaMDA | 137B | Decoder | - | 2022-01 | Paper | - |
GLM | 130B | Decoder | ckpt | 2022-10 | Paper | The GLM-130B License |
YaLM | 100B | Decoder | ckpt | 2022-06 | Blog | Apache 2.0 |
LLaMA | 65B | Decoder | ckpt | 2023-02 | Paper | Non-commercial bespoke license |
Falcon | 40B | Decoder | ckpt | 2023-05 | Homepage | Apache 2.0 |
GPT-NeoX | 20B | Decoder | ckpt | 2022-04 | Paper | Apache 2.0 |
UL2 | 20B | agnostic | ckpt | 2022-05 | Paper | Apache 2.0 |
PanGu-α (鹏程.盘古α) | 13B | Decoder | ckpt | 2021-04 | Paper | Apache 2.0 |
T5 | 11B | Encoder-Decoder | ckpt | 2019-10 | Paper | Apache 2.0 |
CPM-Bee | 10B | Decoder | api | 2022-10 | Paper | - |
rwkv-4 | 7B | RWKV | ckpt | 2022-09 | Github | Apache 2.0 |
GPT-J | 6B | Decoder | ckpt | 2021-06 | Github | Apache 2.0 |
GPT-Neo | 2.7B | Decoder | ckpt | 2021-03 | Github | MIT |
GPT-Neo | 1.3B | Decoder | ckpt | 2021-03 | Github | MIT |
Model | Size | Architecture | Access | Date | Origin | Model License |
---|---|---|---|---|---|---|
Flan-PaLM | 540B | Decoder | - | 2022-10 | Paper | - |
BLOOMZ | 176B | Decoder | ckpt | 2022-11 | Paper | BigScience RAIL License v1.0 |
InstructGPT | 175B | Decoder | api | 2022-03 | Paper | - |
Galactica | 120B | Decoder | ckpt | 2022-11 | Paper | CC-BY-NC-4.0 |
OpenChatKit | 20B | - | ckpt | 2023-03 | - | Apache 2.0 |
Flan-UL2 | 20B | Decoder | ckpt | 2023-03 | Blog | Apache 2.0 |
Gopher | 280B | Decoder | - | 2021-12 | Paper | - |
Chinchilla | 70B | Decoder | - | 2022-03 | Paper | - |
Flan-T5 | 11B | Encoder-Decoder | ckpt | 2022-10 | Paper | Apache 2.0 |
T0 | 11B | Encoder-Decoder | ckpt | 2021-10 | Paper | Apache 2.0 |
Alpaca | 7B | Decoder | demo | 2023-03 | Github | CC-BY-NC-4.0 |
Orca | 13B | Decoder | ckpt | 2023-06 | Paper | Non-commercial bespoke license |
Model | Size | Architecture | Access | Date | Origin |
---|---|---|---|---|---|
GPT-4 | - | - | - | 2023-03 | Blog |
ChatGPT | - | Decoder | demo/api | 2022-11 | Blog |
Sparrow | 70B | - | - | 2022-09 | Paper |
Claude | - | - | demo/api | 2023-03 | Blog |
The above tables could be better summarized by this wonderful visualization from this survey paper:
- LLaMA - A foundational, 65-billion-parameter large language model. LLaMA.cpp Lit-LLaMA
- Alpaca - A model fine-tuned from the LLaMA 7B model on 52K instruction-following demonstrations. Alpaca.cpp Alpaca-LoRA
- Flan-Alpaca - Instruction Tuning from Humans and Machines.
- Baize - Baize is an open-source chat model trained with LoRA. It uses 100k dialogs generated by letting ChatGPT chat with itself.
- Cabrita - A Portuguese instruction-finetuned LLaMA.
- Vicuna - An Open-Source Chatbot Impressing GPT-4 with 90% ChatGPT Quality.
- Llama-X - Open Academic Research on Improving LLaMA to SOTA LLM.
- Chinese-Vicuna - A Chinese Instruction-following LLaMA-based Model.
- GPTQ-for-LLaMA - 4 bits quantization of LLaMA using GPTQ.
- GPT4All - Demo, data, and code to train open-source assistant-style large language models based on GPT-J and LLaMA.
- Koala - A Dialogue Model for Academic Research
- BELLE - Be Everyone's Large Language model Engine
- StackLLaMA - A hands-on guide to train LLaMA with RLHF.
- RedPajama - An Open Source Recipe to Reproduce LLaMA training dataset.
- Chimera - the Latin-language counterpart of the Phoenix model.
- WizardLM|WizardCoder - Family of instruction-following LLMs powered by Evol-Instruct: WizardLM, WizardCoder.
- CaMA - a Chinese-English Bilingual LLaMA Model.
- Orca - Microsoft's finetuned LLaMA model that reportedly matches GPT-3.5, trained on 5M instruction pairs with explanation traces from ChatGPT and GPT-4.
- BayLing - an English/Chinese LLM equipped with advanced language alignment, showing superior capability in English/Chinese generation, instruction following and multi-turn interaction.
- UltraLM - Large-scale, Informative, and Diverse Multi-round Chat Models.
- Guanaco - QLoRA tuned LLaMA
- BLOOM - BigScience Large Open-science Open-access Multilingual Language Model BLOOM-LoRA
- BLOOMZ&mT0 - a family of models capable of following human instructions in dozens of languages zero-shot.
- Phoenix
- T5 - Text-to-Text Transfer Transformer
- T0 - Multitask Prompted Training Enables Zero-Shot Task Generalization
- OPT - Open Pre-trained Transformer Language Models.
- UL2 - a unified framework for pretraining models that are universally effective across datasets and setups.
- GLM - GLM is a General Language Model pretrained with an autoregressive blank-filling objective and can be finetuned on various natural language understanding and generation tasks.
- ChatGLM-6B - ChatGLM-6B is an open-source bilingual (Chinese-English) dialogue language model based on the General Language Model (GLM) architecture, with 6.2 billion parameters.
- ChatGLM2-6B - An open-source bilingual (Chinese-English) chat LLM, the successor to ChatGLM-6B.
- RWKV - Parallelizable RNN with Transformer-level LLM Performance.
- ChatRWKV - ChatRWKV is like ChatGPT but powered by the RWKV (100% RNN) language model.
- StableLM - Stability AI Language Models.
- YaLM - a GPT-like neural network for generating and processing text. It can be used freely by developers and researchers from all over the world.
- GPT-Neo - An implementation of model & data parallel GPT3-like models using the mesh-tensorflow library.
- GPT-J - A 6 billion parameter, autoregressive text generation model trained on The Pile.
- Dolly - a cheap-to-build LLM that exhibits a surprising degree of the instruction following capabilities exhibited by ChatGPT.
- Pythia - Interpreting Autoregressive Transformers Across Time and Scale
- Dolly 2.0 - the first open source, instruction-following LLM, fine-tuned on a human-generated instruction dataset licensed for research and commercial use.
- OpenFlamingo - an open-source reproduction of DeepMind's Flamingo model.
- Cerebras-GPT - A Family of Open, Compute-efficient, Large Language Models.
- GALACTICA - The GALACTICA models are trained on a large-scale scientific corpus.
- GALPACA - GALACTICA 30B fine-tuned on the Alpaca dataset.
- Palmyra - Palmyra Base was primarily pre-trained with English text.
- Camel - a state-of-the-art instruction-following large language model designed to deliver exceptional performance and versatility.
- h2oGPT
- PanGu-α - PanGu-α is a 200B-parameter autoregressive pretrained Chinese language model developed by Huawei Noah's Ark Lab, the MindSpore Team, and Peng Cheng Laboratory.
- MOSS - MOSS is an open-source dialogue language model that supports both Chinese and English and a variety of plugins.
- Open-Assistant - a project meant to give everyone access to a great chat based large language model.
- HuggingChat - Powered by Open Assistant's latest model – the best open source chat model right now and @huggingface Inference API.
- StarCoder - Hugging Face LLM for Code
- MPT-7B - Open LLM for commercial use by MosaicML
- Falcon - Falcon LLM is a foundational large language model with 40 billion parameters, trained on one trillion tokens and released by TII.
- XGen - Salesforce open-source LLMs with 8k sequence length.
- baichuan-7B - baichuan-7B is an open-source, commercially usable large-scale pretrained language model developed by Baichuan Intelligent Technology.
- Aquila - Aquila (悟道·天鹰) is the first open-source large language model with Chinese-English bilingual knowledge that supports a commercial license agreement and meets Chinese data-compliance requirements.
- DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective (a minimal usage sketch follows this list).
- Megatron-DeepSpeed - DeepSpeed version of NVIDIA's Megatron-LM that adds additional support for several features such as MoE model training, Curriculum Learning, 3D Parallelism, and others.
- FairScale - FairScale is a PyTorch extension library for high performance and large scale training.
- Megatron-LM - Ongoing research training transformer models at scale.
- Colossal-AI - Making large AI models cheaper, faster, and more accessible.
- BMTrain - Efficient Training for Big Models.
- Mesh Tensorflow - Mesh TensorFlow: Model Parallelism Made Easier.
- maxtext - A simple, performant and scalable Jax LLM!
- Alpa - Alpa is a system for training and serving large-scale neural networks.
- GPT-NeoX - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
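To give a flavor of how such frameworks slot into a training loop, here is a minimal DeepSpeed sketch; the toy model, the ZeRO stage, and the config values are illustrative assumptions (launch with `deepspeed train.py` rather than `python train.py`):

```python
# Minimal DeepSpeed sketch: deepspeed.initialize wraps model and optimizer
# in an engine that manages fp16, ZeRO partitioning, backward, and step.
import torch
import deepspeed

model = torch.nn.Sequential(                     # toy model (assumption)
    torch.nn.Linear(512, 512), torch.nn.ReLU(), torch.nn.Linear(512, 10))

ds_config = {
    "train_batch_size": 8,
    "train_micro_batch_size_per_gpu": 8,
    "fp16": {"enabled": True},
    "zero_optimization": {"stage": 2},           # ZeRO stage 2 (assumption)
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
}

engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config)

for step in range(10):                           # dummy batches for illustration
    x = torch.randn(8, 512, device=engine.device, dtype=torch.half)
    y = torch.randint(0, 10, (8,), device=engine.device)
    loss = torch.nn.functional.cross_entropy(engine(x), y)
    engine.backward(loss)                        # engine-managed backward
    engine.step()                                # optimizer step + ZeRO bookkeeping
```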
- FastChat - A distributed multi-model LLM serving system with web UI and OpenAI-compatible RESTful APIs.
- SkyPilot - Run LLMs and batch jobs on any cloud. Get maximum cost savings, highest GPU availability, and managed execution, all with a simple interface.
- vLLM - A high-throughput and memory-efficient inference and serving engine for LLMs (a minimal sketch follows this list).
- Text Generation Inference - A Rust, Python, and gRPC server for text generation inference, used in production at HuggingFace to power the LLM api-inference widgets.
- Haystack - An open-source NLP framework that allows you to use LLMs and transformer-based models from Hugging Face, OpenAI, and Cohere to interact with your own data.
- Sidekick - Data integration platform for LLMs.
- LangChain - Building applications with LLMs through composability.
- wechat-chatgpt - Use ChatGPT on WeChat via wechaty.
- promptfoo - Test your prompts. Evaluate and compare LLM outputs, catch regressions, and improve prompt quality.
- Agenta - Easily build, version, evaluate, and deploy your LLM-powered apps.
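As an example of what offline inference with one of these engines looks like, here is a minimal vLLM sketch following its quickstart API; the OPT checkpoint is an illustrative choice, and any Hugging Face causal LM that vLLM supports works:

```python
# Minimal vLLM offline inference: continuous batching and PagedAttention
# are handled internally by the LLM engine.
from vllm import LLM, SamplingParams

prompts = [
    "The capital of France is",
    "Large language models are",
]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")  # illustrative small checkpoint
outputs = llm.generate(prompts, sampling_params)

for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```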
- [Andrej Karpathy] State of GPT video
- [Hyung Won Chung] Instruction finetuning and RLHF lecture Youtube
- [Jason Wei] Scaling, emergence, and reasoning in large language models Slides
- [Susan Zhang] Open Pretrained Transformers Youtube
- [Ameet Deshpande] How Does ChatGPT Work? Slides
- [Yao Fu] Pretraining, Instruction Tuning, Alignment, Specialization: On the Sources of Large Language Model Capabilities (预训练,指令微调,对齐,专业化:论大语言模型能力的来源) Bilibili
- [Hung-yi Lee] An Analysis of How ChatGPT Works (ChatGPT 原理剖析) Youtube
- [Jay Mody] GPT in 60 Lines of NumPy Link
- [ICML 2022] Welcome to the "Big Model" Era: Techniques and Systems to Train and Serve Bigger Models Link
- [NeurIPS 2022] Foundational Robustness of Foundation Models Link
- [Andrej Karpathy] Let's build GPT: from scratch, in code, spelled out. Video|Code
- [DAIR.AI] Prompt Engineering Guide Link
- [邱锡鹏] Capability Analysis and Applications of Large Language Models (大型语言模型的能力分析与应用) Slides | Video
- [Philipp Schmid] Fine-tune FLAN-T5 XL/XXL using DeepSpeed & Hugging Face Transformers Link
- [HuggingFace] Illustrating Reinforcement Learning from Human Feedback (RLHF) Link
- [HuggingFace] What Makes a Dialog Agent Useful? Link
- [张俊林] The Road to AGI: Essentials of Large Language Model (LLM) Technology (通向AGI之路:大型语言模型(LLM)技术精要) Link
- [大师兄] ChatGPT/InstructGPT Explained in Detail (ChatGPT/InstructGPT详解) Link
- [HeptaAI] Inside ChatGPT: InstructGPT and PPO Reinforcement Learning from Feedback Instructions (ChatGPT内核:InstructGPT,基于反馈指令的PPO强化学习) Link
- [Yao Fu] How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources Link
- [Stephen Wolfram] What Is ChatGPT Doing … and Why Does It Work? Link
- [Jingfeng Yang] Why did all of the public reproduction of GPT-3 fail? Link
- [Hung-yi Lee] How ChatGPT Is (Possibly) Made: The Socialization of GPT (ChatGPT (可能)是怎麼煉成的 - GPT 社會化的過程) Video
- [Keyvan Kambakhsh] Pure Rust implementation of a minimal Generative Pretrained Transformer code
- [DeepLearning.AI] ChatGPT Prompt Engineering for Developers Homepage
- [Princeton] Understanding Large Language Models Homepage
- [OpenBMB] Open Course on Large Models (大模型公开课) Homepage
- [Stanford] CS224N-Lecture 11: Prompting, Instruction Finetuning, and RLHF Slides
- [Stanford] CS324-Large Language Models Homepage
- [Stanford] CS25-Transformers United V2 Homepage
- [Stanford Webinar] GPT-3 & Beyond Video
- [李沐] InstructGPT Paper Walkthrough (InstructGPT论文精读) Bilibili Youtube
- [陳縕儂] OpenAI InstructGPT: Learning from Human Feedback, the Predecessor of ChatGPT (OpenAI InstructGPT 從人類回饋中學習 ChatGPT 的前身) Youtube
- [李沐] HELM: A Holistic Evaluation of Language Models (HELM全面语言模型评测) Bilibili
- [李沐] GPT, GPT-2, GPT-3 Paper Walkthrough (GPT,GPT-2,GPT-3 论文精读) Bilibili Youtube
- [Aston Zhang] Chain-of-Thought Paper Walkthrough (Chain of Thought论文) Bilibili Youtube
- [MIT] Introduction to Data-Centric AI Homepage
- A Stage Review of Instruction Tuning [2023-06-29] [Yao Fu]
- LLM Powered Autonomous Agents [2023-06-23] [Lilian Weng]
- Why you should work on AI AGENTS! [2023-06-22] [Andrej Karpathy]
- Google "We Have No Moat, And Neither Does OpenAI" [2023-05-05]
- AI competition statement [2023-04-20] [petergabriel]
- My Worldview on Large Models (我的大模型世界观) [2023-04-23] [陆奇]
- Prompt Engineering [2023-03-15] [Lilian Weng]
- Noam Chomsky: The False Promise of ChatGPT [2023-03-08] [Noam Chomsky]
- Is ChatGPT 175 Billion Parameters? Technical Analysis [2023-03-04] [Owen]
- Towards ChatGPT and Beyond [2023-02-20] [知乎] [欧泽彬]
- The Difficulties of Catching Up with ChatGPT, and Its Alternatives (追赶ChatGPT的难点与平替) [2023-02-19] [李rumor]
- A Conversation with Zhang Xiangyu of Megvii Research: ChatGPT's Research Value May Be Even Greater (对话旷视研究院张祥雨|ChatGPT的科研价值可能更大) [2023-02-16] [知乎] [旷视科技]
- Conjectures on Eight Technical Questions about ChatGPT (关于ChatGPT八个技术问题的猜想) [2023-02-15] [知乎] [张家俊]
- ChatGPT: Development History, Principles, Technical Architecture, and Industry Outlook (ChatGPT发展历程、原理、技术架构详解和产业未来) [2023-02-15] [知乎] [陈巍谈芯]
- Twenty Views on ChatGPT (对ChatGPT的二十点看法) [2023-02-13] [知乎] [熊德意]
- ChatGPT: What I Saw, Heard, and Felt (ChatGPT-所见、所闻、所感) [2023-02-11] [知乎] [刘聪NLP]
- The Next Generation Of Large Language Models [2023-02-07] [Forbes]
- Large Language Model Training in 2023 [2023-02-03] [Cem Dilmegani]
- What Are Large Language Models Used For? [2023-01-26] [NVIDIA]
- Large Language Models: A New Moore's Law [2021-10-26] [Huggingface]
- LLMsPracticalGuide - A curated (still actively updated) list of practical guide resources of LLMs
- Awesome ChatGPT Prompts - A collection of prompt examples to be used with the ChatGPT model.
- awesome-chatgpt-prompts-zh - A Chinese collection of prompt examples to be used with the ChatGPT model.
- Awesome ChatGPT - Curated list of resources for ChatGPT and GPT-3 from OpenAI.
- Chain-of-Thoughts Papers - A trend started by "Chain of Thought Prompting Elicits Reasoning in Large Language Models".
- Instruction-Tuning-Papers - A trend started by Natural-Instructions (ACL 2022), FLAN (ICLR 2022), and T0 (ICLR 2022).
- LLM Reading List - A paper & resource list of large language models.
- Reasoning using Language Models - Collection of papers and resources on Reasoning using Language Models.
- Chain-of-Thought Hub - Measuring LLMs' Reasoning Performance
- Awesome GPT - A curated list of awesome projects and resources related to GPT, ChatGPT, OpenAI, LLM, and more.
- Awesome GPT-3 - a collection of demos and articles about the OpenAI GPT-3 API.
- Awesome LLM Human Preference Datasets - a collection of human preference datasets for LLM instruction tuning, RLHF and evaluation.
- RWKV-howto - possibly useful materials and tutorial for learning RWKV.
- ModelEditingPapers - A paper & resource list on model editing for large language models.
- Awesome LLM Security - A curation of awesome tools, documents and projects about LLM Security.
- Arize-Phoenix - Open-source tool for ML observability that runs in your notebook environment. Monitor and fine tune LLM, CV and Tabular Models.
- Emergent Mind - The latest AI news, curated & explained by GPT-4.
- ShareGPT - Share your wildest ChatGPT conversations with one click.
- Major LLMs + Data Availability
- 500+ Best AI Tools
- Cohere Summarize Beta - Introducing Cohere Summarize Beta: A New Endpoint for Text Summarization
- chatgpt-wrapper - ChatGPT Wrapper is an open-source unofficial Python API and CLI that lets you interact with ChatGPT.
- Open-evals - A framework extending OpenAI's Evals for different language models.
- Cursor - Write, edit, and chat about your code with a powerful AI.
- AutoGPT - an experimental open-source application showcasing the capabilities of the GPT-4 language model.
- OpenAGI - When LLM Meets Domain Experts.
- HuggingGPT - Solving AI Tasks with ChatGPT and its Friends in HuggingFace.
- EasyEdit - An easy-to-use framework to edit large language models.
- chatgpt-shroud - A Chrome extension for OpenAI's ChatGPT, enhancing user privacy by enabling easy hiding and unhiding of chat history. Ideal for privacy during screen shares.
This is an active repository and your contributions are always welcome!
I will keep some pull requests open if I'm not sure whether they belong in this list; you can vote for them by adding 👍 to them.
If you have any questions about this opinionated list, do not hesitate to contact me at chengxin1998@stu.pku.edu.cn.