This repository provides a docker-compose setup to run a self-hosted Ollama instance with the Open WebUI.
It is configured to connect to a shared Docker network, allowing easy integration with other services like n8n.
- ✅ Complete Privacy: Your data is processed locally and never leaves your machine.
- ✅ Offline Access: Works without an internet connection after initial setup.
- ✅ No Rate Limits or Costs: Use models as much as you want without API fees.
- ✅ Ollama: Run and manage large language models locally.
- ✅ Open WebUI: A user-friendly web interface for Ollama.
- ✅ Multiple Hardware Profiles: Pre-configured for CPU, NVIDIA, and AMD GPUs.
- ✅ Helper Scripts: Easy-to-use scripts for starting, restarting, and updating the services.
```bash
git clone https://github.com/AiratTop/ollama-self-hosted.git
cd ollama-self-hosted
```

If you haven't already, create the shared Docker network. This allows other containers (like n8n) to communicate with Ollama.
```bash
docker network create shared_network
```
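If you re-run the setup on a machine where the network may already exist, a guarded variant avoids the "network already exists" error (a small convenience sketch, not part of the repository's scripts):

```bash
# Create shared_network only if it does not already exist (safe to re-run)
docker network inspect shared_network >/dev/null 2>&1 || docker network create shared_network
```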
Edit the .env file to select the hardware profile for Ollama.

```env
# .env
# Choose one of the available profiles: cpu, gpu-nvidia, gpu-amd
COMPOSE_PROFILES=cpu
```

- cpu: For CPU-only inference.
- gpu-nvidia: For NVIDIA GPUs. Requires the NVIDIA Container Toolkit.
- gpu-amd: For AMD GPUs on Linux.
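The profile does not have to live in .env: Docker Compose also reads COMPOSE_PROFILES from the environment and accepts a --profile flag, so you can try a profile once without editing the file. For example:

```bash
# One-off start with the NVIDIA profile, leaving .env untouched
COMPOSE_PROFILES=gpu-nvidia docker compose up -d

# Equivalent using the --profile flag
docker compose --profile gpu-nvidia up -d
```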
Once you have configured your profile in the .env file, you can manage the services with Docker Compose and the included helper scripts.
- Start:

  ```bash
  docker compose up -d
  ```

- Restart:

  ```bash
  ./restart-docker.sh
  ```

- Update:

  ```bash
  ./update-docker.sh
  ```
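To confirm everything came up cleanly, the standard Compose commands work as usual (the service name ollama is assumed here to match the container name used elsewhere in this README):

```bash
# Show the state of the containers defined in this compose project
docker compose ps

# Follow the Ollama logs, e.g. to watch model loading
docker compose logs -f ollama
```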
After starting the services, you can access the Open WebUI at http://localhost:8080.
This setup is part of a larger collection of self-hosted services designed to create a complete, private development stack. It is pre-configured to work seamlessly with projects like n8n-self-hosted.
Since both services are on the shared_network, you can connect to Ollama from your n8n "Ollama" node using http://ollama:11434 as the Base URL.
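To verify the connection independently of n8n, you can query the Ollama API from any throwaway container attached to the same network (the curlimages/curl image and the ollama hostname are assumptions based on the setup described above):

```bash
# List the models Ollama exposes over the shared network
docker run --rm --network shared_network curlimages/curl \
  curl -s http://ollama:11434/api/tags
```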
For other components like Qdrant, Caddy, and monitoring, see the list in the See Also section below.
After starting the services, you can download models in two ways:
You can pull models directly from the command line using docker exec. Browse available models on the Ollama Library.
For example, to download the gemma3:1b model, run:

```bash
docker exec -it ollama ollama pull gemma3:1b
```

You can also download models through the web interface.
- Open the Open WebUI at http://localhost:8080.
- Open the "Models" dropdown in the top menu.
- In the "Pull a model" field, enter the name of the model you want to download (e.g.,
gemma3:1b) and click the download button.
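Whichever method you use, you can check which models are installed and give a new one a quick smoke test from the CLI (gemma3:1b is just the example model from above):

```bash
# List all models currently stored by this Ollama instance
docker exec -it ollama ollama list

# Run a one-off prompt against the freshly pulled model
docker exec -it ollama ollama run gemma3:1b "Say hello in one short sentence."
```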
Check out other self-hosted solutions:
- postgresql-self-hosted: A simple and robust PostgreSQL setup.
- mysql-self-hosted: A self-hosted MySQL instance.
- clickhouse-self-hosted: High-performance columnar database for analytics.
- metabase-self-hosted: Self-hosted Metabase on Docker for business intelligence and analytics.
- qdrant-self-hosted: A vector database for AI applications.
- redis-self-hosted: A fast in-memory data store, often used as a cache or message broker.
- caddy-self-hosted: A modern, easy-to-use web server with automatic HTTPS.
- wordpress-self-hosted: Production-ready WordPress stack with MySQL, phpMyAdmin, and WP-CLI.
- n8n-self-hosted: Scalable n8n with workers, Caddy for auto-HTTPS, and backup scripts.
- monitoring-self-hosted: Self-hosted monitoring stack with Prometheus and Grafana.
- ollama-self-hosted: Ready-to-use solution for running Ollama with the Open WebUI on Docker.
- authentik-self-hosted: Authentik is a flexible, open-source Identity & Access Management (IAM) solution.
- gatus-self-hosted: Automated service health dashboard with a PostgreSQL backend and backup scripts.
- beszel-self-hosted: Ready-to-run Beszel hub + agent stack for monitoring your infrastructure.
This project is licensed under the MIT License - see the LICENSE file for details.
AiratTop
- Website: airat.top
- GitHub: @AiratTop
- Email: mail@airat.top
- Repository: ollama-self-hosted
