---
title: Connect
emoji: 🔵
colorFrom: blue
colorTo: blue
sdk: gradio
app_file: app.py
sdk_version: 5.15.0
pinned: false
python_version: 3.12
license: mit
short_description: Arena for playing Four-in-a-row between LLMs
---
It has been great fun making this Arena and watching LLMs duke it out!
Quick links:
- The Live Arena, courtesy of the amazing HuggingFace Spaces
- The GitHub repo for the code
- My video walkthrough of the code
- My LinkedIn - I love connecting!
If you'd like to learn more about this:
- I have a best-selling, intensive 8-week Mastering LLM engineering course that covers models and APIs, along with RAG, fine-tuning and Agents.
- I'm running a number of Live Events with O'Reilly and Pearson
- Clone the repo with `git clone https://github.com/ed-donner/connect.git`
- Change to the project directory with `cd connect`
- Create a Python virtualenv with `python -m venv venv`
- Activate your environment with either `venv\Scripts\activate` on Windows, or `source venv/bin/activate` on Mac/Linux (a quick check below confirms the activation worked)
- Then run `pip install -r requirements.txt` to install the packages
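If you want to confirm the virtualenv is active before the `pip install`, Python can tell you directly. This quick check is my suggestion rather than something from the repo:

```python
# Inside an active venv, sys.prefix points at the venv while
# sys.base_prefix still points at the original interpreter.
import sys

print("venv active" if sys.prefix != sys.base_prefix else "venv NOT active")
```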
If you wish to experiment with the prototype, run `jupyter lab` to launch the lab, then look at the notebook `prototype.ipynb`.

To launch the app locally, run `python app.py`.
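For orientation, here is a minimal sketch of the kind of Gradio entry point that `app.py` provides. The real app's layout, models and game logic differ, and every name below is hypothetical:

```python
# Minimal sketch only -- the actual app.py in this repo is more involved.
import gradio as gr

def run_match(model_red: str, model_yellow: str) -> str:
    # Hypothetical placeholder: the real Arena pits two LLMs against
    # each other at Four-in-a-row and renders the board as the game runs.
    return f"{model_red} vs {model_yellow}: the game would play out here"

with gr.Blocks(title="Connect") as demo:
    red = gr.Dropdown(["gpt-4o-mini", "claude-3-5-haiku-latest"], label="Red player")
    yellow = gr.Dropdown(["gpt-4o-mini", "claude-3-5-haiku-latest"], label="Yellow player")
    result = gr.Textbox(label="Result")
    gr.Button("Play").click(run_match, inputs=[red, yellow], outputs=result)

demo.launch()  # `python app.py` then serves the UI, by default on http://localhost:7860
```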
Please create a file with the exact name `.env` in the project root directory (`connect`). You would typically use Notepad (Windows) or nano (Mac) for this. If you're not familiar with setting up a `.env` file this way, ask ChatGPT! It will give much more eloquent instructions than me. 😂

Your `.env` file should contain the following; add whichever keys you would like to use.
```
OPENAI_API_KEY=sk-proj-...
ANTHROPIC_API_KEY=sk-ant-...
DEEPSEEK_API_KEY=sk...
GROQ_API_KEY=...
```
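To verify the keys are being picked up, here's roughly how a Python project reads a `.env` file. I'm assuming the python-dotenv package here, which the `.env` convention usually implies; this snippet is a sketch, not code from the repo:

```python
# Sketch: confirm which API keys the .env file provides.
# Assumes the python-dotenv package (pip install python-dotenv).
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current directory into os.environ

for key in ("OPENAI_API_KEY", "ANTHROPIC_API_KEY", "DEEPSEEK_API_KEY", "GROQ_API_KEY"):
    print(f"{key}: {'set' if os.getenv(key) else 'missing'}")
```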
You can run Ollama locally, and the Arena will connect to it to run local models (there's a connection sketch after these steps).

1. Download and install Ollama from https://ollama.com, noting that on a PC you might need administrator permissions for the install to work properly.
2. On a PC, start a Command Prompt / PowerShell (press Win + R, type `cmd`, and press Enter). On a Mac, start a Terminal (Applications > Utilities > Terminal).
3. Run `ollama run llama3.2`, or for smaller machines try `ollama run llama3.2:1b`.
4. If this doesn't work, you may need to run `ollama serve` in another PowerShell (Windows) or Terminal (Mac), and try step 3 again.