Large Model
GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
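The GPT4All project above also ships Python bindings for local inference. A minimal sketch, assuming the `gpt4all` package is installed; the model filename is an assumption, so substitute any model from the GPT4All catalog:

```python
# Local LLM inference with the gpt4all Python bindings.
# The model filename below is an assumption; any model listed in the
# GPT4All catalog works (it is downloaded on first use).
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    reply = model.generate(
        "Explain parameter-efficient fine-tuning in one sentence.",
        max_tokens=128,
    )
    print(reply)
```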
Provides a practical interaction interface for LLMs such as GPT and GLM, specially optimized for paper reading, polishing, and writing. Modular design, supports custom shortcut buttons & function plugins, analysis & self-translation of Python and C++ projects, PDF/LaTeX paper translation & summarization, parallel queries to multiple LLMs, and local models such as chatglm3. Integrates Tongyi Qianwen, deepseekcoder, iFlytek Spark, ERNIE Bot, llama2, rwkv, claude2, m…
A modular RL library to fine-tune language models to human preferences
❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119
The code of IJCAI2022 paper, Declaration-based Prompt Tuning for Visual Question Answering
Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22)
ChatGLM-6B: An Open Bilingual Dialogue Language Model
JARVIS, a system to connect LLMs with the ML community. Paper: https://arxiv.org/pdf/2303.17580.pdf
[ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?" Oral @ ICLR 2023
AutoGPT is the vision of accessible AI for everyone, to use and to build on. Our mission is to provide the tools, so that you can focus on what matters.
🤖 Assemble, configure, and deploy autonomous AI Agents in your browser.
Test-time Prompt Tuning (TPT) for zero-shot generalization in vision-language models (NeurIPS 2022)
Grounded SAM: Marrying Grounding DINO with Segment Anything & Stable Diffusion & Recognize Anything - Automatically Detect, Segment, and Generate Anything
[CVPR 2023] Official repository of paper titled "MaPLe: Multi-modal Prompt Learning".
[ACL 2023] Delving into the Openness of CLIP
Easily compute clip embeddings and build a clip retrieval system with them
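The clip-retrieval entry above exposes a Python client for querying a prebuilt CLIP index. A minimal sketch, following the project's documented example; the backend URL and index name are assumptions and may need updating, or can point at a self-hosted backend:

```python
# Query a CLIP retrieval backend with the clip-retrieval client.
# URL and index name follow the project's example and are assumptions
# here; replace them with your own backend if the demo is unavailable.
from clip_retrieval.clip_client import ClipClient

client = ClipClient(
    url="https://knn.laion.ai/knn-service",
    indice_name="laion5B-L-14",
)
results = client.query(text="an orange tabby cat sleeping on a keyboard")
for r in results[:3]:
    print(r["similarity"], r["url"])
```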
Downstream task fine-tuning based on ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B, covering Freeze, LoRA, P-tuning, and full-parameter fine-tuning.
🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning.
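🤗 PEFT wraps a frozen base model with small trainable adapters such as LoRA. A minimal sketch, assuming `transformers` and `peft` are installed; the base model and LoRA hyperparameters below are illustrative:

```python
# Wrap a causal LM with LoRA adapters via 🤗 PEFT; only the adapter
# weights are trainable, while the base model stays frozen.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")  # illustrative base model

lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,              # rank of the low-rank update matrices
    lora_alpha=16,    # scaling factor applied to the update
    lora_dropout=0.05,
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # typically <1% of parameters are trainable
```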
Official implementation of AAAI 2023 paper "Parameter-efficient Model Adaptation for Vision Transformers"
A research project for natural language generation, containing the official implementations by the MSRA NLC team.
Stable Diffusion web UI
Reasoning in LLMs: Papers and Resources, including Chain-of-Thought, OpenAI o1, and DeepSeek-R1 🍓
The official GitHub page for the survey paper "A Survey of Large Language Models".
800,000 step-level correctness labels on LLM solutions to MATH problems