An efficient, flexible, and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
An LLM API server with an OpenAI-compatible interface, supporting ChatGLM3, Llama, Llama-3, Firefunction, Openfunctions, BAAI/bge-m3, and bge-large-zh-v1.5.
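A minimal sketch of how such an OpenAI-compatible server is typically consumed from the official openai client; the base URL, API key, and model identifiers (chatglm3-6b, bge-m3) are assumptions for illustration, not taken from the project:

```python
# Hedged sketch: querying a local OpenAI-compatible LLM API server.
# base_url, api_key, and model names are assumed values, not the project's defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

# Chat completion against a locally served ChatGLM3 model.
chat = client.chat.completions.create(
    model="chatglm3-6b",  # assumed model id exposed by the server
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(chat.choices[0].message.content)

# Embeddings from a locally served BAAI/bge-m3 model.
emb = client.embeddings.create(model="bge-m3", input=["retrieval-augmented generation"])
print(len(emb.data[0].embedding))
```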
C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V)
Genshin Impact character instruction models fine-tuned with LoRA on top of LLMs.
open-llms-next-web: an open-source large language model web demo similar to chatgpt-next-web, supporting offline open-source models and PEFT models.
A small toy demo: a search-engine agent built with LangChain that calls a local ChatGLM3-6B model.
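A hedged sketch of that pattern, assuming the local ChatGLM3-6B model is exposed through an OpenAI-compatible endpoint and that a generic web-search tool stands in for the repo's actual search backend; the URL, model name, and tool are assumptions, not the project's code:

```python
# Hypothetical sketch: a LangChain search agent backed by a local ChatGLM3-6B
# served behind an OpenAI-compatible API. Requires langchain, langchain-openai,
# langchain-community, and duckduckgo-search. All endpoint/model details are assumed.
from langchain_openai import ChatOpenAI
from langchain_community.tools import DuckDuckGoSearchRun
from langchain.agents import initialize_agent, AgentType

llm = ChatOpenAI(
    base_url="http://localhost:8000/v1",  # assumed local ChatGLM3-6B endpoint
    api_key="EMPTY",
    model="chatglm3-6b",
    temperature=0,
)

search = DuckDuckGoSearchRun()  # stand-in search tool for the demo

agent = initialize_agent(
    tools=[search],
    llm=llm,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)

print(agent.run("What models does the chatglm3 topic on GitHub cover?"))
```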
A Genshin Impact in-game book question-answering project powered by an LLM.
A spoken-English chatbot that runs in real time and offline, based on an LLM.
This project accelerates local deployment of ChatGLM and vector inference using PyTorch compiled to C++, and includes an OpenAI API mock script for quickly setting up a local speed-testing service. The setup improves performance and efficiency, making it suitable for high-performance applications and development testing.
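A minimal sketch of what such an OpenAI API mock can look like, assuming a FastAPI app that imitates the /v1/chat/completions schema and returns a canned reply so client throughput and latency can be measured without a real model; this is an illustrative stand-in, not the project's actual script:

```python
# Hedged sketch: a mock OpenAI chat-completions endpoint for local speed testing.
# Returns a fixed response in the OpenAI response format; no model is loaded.
import time
import uuid
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/v1/chat/completions")
async def chat_completions(request: Request):
    body = await request.json()
    return {
        "id": f"chatcmpl-{uuid.uuid4().hex[:12]}",
        "object": "chat.completion",
        "created": int(time.time()),
        "model": body.get("model", "mock-model"),
        "choices": [{
            "index": 0,
            "message": {"role": "assistant", "content": "mock response"},
            "finish_reason": "stop",
        }],
        "usage": {"prompt_tokens": 0, "completion_tokens": 0, "total_tokens": 0},
    }

# Run with: uvicorn mock_server:app --port 8000
```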