This project sets up an Open WebUI interface with LiteLLM as a backend proxy for various AI models. It uses Docker Compose to orchestrate the services.
- Docker and Docker Compose installed on your system
- API keys for the AI models you want to use
1. Create a `.env` file in the project root with the following content:

   ```
   MASTER_KEY=your_master_key # required
   ANTHROPIC_API_KEY=your_anthropic_api_key
   OPENAI_API_KEY=your_openai_api_key
   ```

   Replace `your_*_api_key` with your actual API keys.

2. Ensure the `config.yml` file is present in the project root. This file configures the available models for LiteLLM. Feel free to add more models here!

3. Start the services:

   ```
   docker-compose up -d
   ```

4. Access the Open WebUI interface at http://localhost:3000

5. Access the LiteLLM OpenAPI page at http://localhost:4000
- Open WebUI: Runs on port 3000, provides the user interface.
- LiteLLM: Runs on port 4000, acts as a proxy for various AI models.
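For reference, a minimal `docker-compose.yml` wiring the two services together might look like the sketch below. The image tags, the internal Open WebUI port, and the `OPENAI_API_BASE_URL` wiring are assumptions; adjust them to match the actual compose file in this repo:

```yaml
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main     # assumed image tag
    ports:
      - "3000:8080"                               # Open WebUI serves on 8080 inside the container
    environment:
      - OPENAI_API_BASE_URL=http://litellm:4000/v1  # point the UI at the LiteLLM proxy
      - OPENAI_API_KEY=${MASTER_KEY}              # authenticate to LiteLLM with the master key
    depends_on:
      - litellm

  litellm:
    image: ghcr.io/berriai/litellm:main-latest    # assumed image tag
    ports:
      - "4000:4000"
    volumes:
      - ./config.yml:/app/config.yml              # model configuration from the project root
    command: ["--config", "/app/config.yml", "--port", "4000"]
    env_file:
      - .env                                      # MASTER_KEY and provider API keys
```

Routing Open WebUI through the `litellm` service name (rather than `localhost`) keeps the two containers talking over the compose network.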
The following models are configured in config.yml:
- gpt-4.1 (OpenAI)
- gpt-4.1-mini (OpenAI)
- gpt-4o (OpenAI)
- gpt-4o-mini (OpenAI)
- o3 (OpenAI)
- o3-mini (OpenAI)
- o4-mini (OpenAI)
- o4-mini-high (OpenAI)
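Entries in a LiteLLM proxy `config.yml` follow the `model_list` schema; a sketch for two of the models above (the exact contents of this repo's file may differ):

```yaml
model_list:
  - model_name: gpt-4o                      # name exposed to Open WebUI
    litellm_params:
      model: openai/gpt-4o                  # upstream provider/model
      api_key: os.environ/OPENAI_API_KEY    # resolved from the environment at startup
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY

general_settings:
  master_key: os.environ/MASTER_KEY         # clients must present this key to the proxy
```

Adding another model is just another `model_list` entry with its own `model_name` and `litellm_params`.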
This setup uses environment variables to manage API keys. Ensure that your `.env` file is not committed to version control (add it to `.gitignore`) and is properly secured. The tool of choice for managing these secrets is the 1Password CLI.
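One way to use the 1Password CLI here (a sketch; the vault and item names are placeholders) is to keep `op://` secret references in `.env` instead of plaintext values and let `op run` resolve them when starting the stack:

```shell
# .env holds references rather than real secrets, e.g.:
#   OPENAI_API_KEY=op://Private/OpenAI/api-key
#   ANTHROPIC_API_KEY=op://Private/Anthropic/api-key
# `op run` resolves the references and injects the real values
# into the child process's environment only:
op run --env-file .env -- docker-compose up -d
```

This way the plaintext keys never live on disk, and the `.env` file itself is safe to keep around locally.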