A webui for chatglm-6b, the chat model released by THUDM.
- Original chat interface like chatglm-6b's demo, but using the Gradio Chatbot component for a better user experience.
- One-click install script (but you still need to install Python yourself)
- More parameters that can be freely adjusted
- Convenient save/load of dialogue history and presets
- Custom maximum context length
- Save to Markdown
- Use program arguments to specify the model and calculation precision
Requires Python 3.10.

```sh
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117
pip install --upgrade -r requirements.txt
```

or

```sh
bash install.sh
```
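Before launching, you can optionally verify that the CUDA build of PyTorch was actually installed. This is not part of the install script, just a quick sanity check:

```sh
# Optional: should print the CUDA version (e.g. 11.7) and True on a CUDA-capable machine
python -c "import torch; print(torch.version.cuda, torch.cuda.is_available())"
```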
```sh
python webui.py
```
Program arguments:

- `--model-path`: specify the model path. If this parameter is not specified manually, the default value is `THUDM/chatglm-6b`, and Transformers will automatically download the model from Hugging Face.
- `--listen`: launch Gradio with `0.0.0.0` as the server name, allowing it to respond to network requests
- `--port`: webui port
- `--share`: use Gradio to create a public share link
- `--precision`: calculation precision: fp32 (CPU only), fp16, int4 (CUDA GPU only), int8 (CUDA GPU only)
- `--cpu`: use the CPU
- `--path-prefix`: URL root path. If this parameter is not specified manually, the default value is `/`. Using a path prefix of `/foo/bar` enables ChatGLM-webui to serve from `http://$ip:$port/foo/bar/` rather than `http://$ip:$port/`.
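For example (port and precision values here are only illustrative; adjust them to your setup), a GPU launch with int4 quantization serving on the local network, and a CPU-only launch, could look like:

```sh
# Serve on all interfaces, port 7860, using the int4-quantized model (CUDA GPU required)
python webui.py --model-path THUDM/chatglm-6b --precision int4 --listen --port 7860

# CPU-only launch with fp32 precision
python webui.py --cpu --precision fp32
```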