This repository contains a framework for large language model debate and a method for deploying local large language models that provide streaming responses.
Requirements:
- Python >= 3.8
- OpenAI API key (optional, for using GPT-3.5-turbo or GPT-4 as an LLM agent)
To launch the demo on your local machine, first clone this repository, then install the dependencies with pip:
git clone https://github.com/HITsz-TMG/DebateArena.git
cd DebateArena
pip install -r requirements.txt
To get model responses, you need a model API. There are two options: calling a public API, or deploying local large language models. Deploying local services requires sufficient computing power. We currently support four models (Vicuna, Baichuan2, Llama 2, and OpenChat). The local services are based on Flask; launch the one you need:
python app_vicuna.py
python app_baichuan2.py
python app_llama2.py
python app_openchat.py
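For reference, a local service of this kind can be sketched as a minimal Flask app with a streaming endpoint. This is only an illustration, not the repository's actual code: the `/generate` route, the JSON request format, and the `generate_tokens` placeholder (which stands in for the real model's incremental decoding loop) are all assumptions.

```python
# Minimal sketch of a Flask streaming service in the style of the
# app_*.py scripts above. The model call is replaced by a dummy
# token generator; the route name and request format are assumptions.
from flask import Flask, Response, request

app = Flask(__name__)

def generate_tokens(prompt):
    # Placeholder for the real model's token-by-token decoding loop.
    for word in ("You", "said:", *prompt.split()):
        yield word + " "

@app.route("/generate", methods=["POST"])
def generate():
    # Extract the prompt before returning, so the generator does not
    # touch the request object after the handler has exited.
    prompt = request.get_json().get("prompt", "")
    return Response(generate_tokens(prompt), mimetype="text/plain")
```

Passing a generator to `Response` is what makes Flask stream the body chunk by chunk instead of buffering the full reply.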
If you want to use GPT-3.5-turbo or GPT-4 as an LLM agent, add your OpenAI API key in gradio_web_server_1.py:
import os
os.environ['OPENAI_API_KEY'] = 'YOUR OPENAI_API_KEY'
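Alternatively, you can export the key in your shell before launching the demo, assuming the script (or the OpenAI client it uses) reads the variable from the environment; this avoids hard-coding the key in the source:

```shell
export OPENAI_API_KEY='YOUR OPENAI_API_KEY'
```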
The demo is based on Gradio. Start the web server with:
python gradio_web_server_1.py
If you have any questions, please feel free to contact me by e-mail (22S051013@stu.hit.edu.com) or open an issue in this repository.