Ollama Self-Hosted with Docker


This repository provides a Docker Compose setup for running a self-hosted Ollama instance together with Open WebUI.

It is configured to connect to a shared Docker network, allowing easy integration with other services like n8n.

What’s included

  • Complete Privacy: Your data is processed locally and never leaves your machine.
  • Offline Access: Works without an internet connection after initial setup.
  • No Rate Limits or Costs: Use models as much as you want without API fees.
  • Ollama: Run and manage large language models locally.
  • Open WebUI: A user-friendly web interface for Ollama.
  • Multiple Hardware Profiles: Pre-configured for CPU, NVIDIA, and AMD GPUs.
  • Helper Scripts: Easy-to-use scripts for starting, restarting, and updating the services.

Installation

1. Clone the Repository

git clone https://github.com/AiratTop/ollama-self-hosted.git
cd ollama-self-hosted
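
The rest of this guide assumes Docker Engine with the Compose v2 plugin (the docker compose command). A quick way to confirm both are available:

docker --version
docker compose version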

2. Create the Shared Network

If you haven't already, create the shared Docker network. This allows other containers (like n8n) to communicate with Ollama.

docker network create shared_network
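
Creating the network a second time produces an error, so if you are unsure whether it already exists you can check first:

docker network ls --filter name=shared_network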

3. Configure the Profile

Edit the .env file to select the hardware profile for Ollama.

# .env
# Choose one of the available profiles: cpu, gpu-nvidia, gpu-amd
COMPOSE_PROFILES=cpu
  • cpu: For CPU-only inference.
  • gpu-nvidia: For NVIDIA GPUs. Requires the NVIDIA Container Toolkit.
  • gpu-amd: For AMD GPUs on Linux.
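
COMPOSE_PROFILES simply tells Docker Compose which of the profiled service definitions to start. For a one-off test you can also pass the profile on the command line instead of editing .env; for example, assuming the profile names listed above:

# Equivalent to setting COMPOSE_PROFILES=gpu-nvidia in .env
docker compose --profile gpu-nvidia up -d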

Running the Services

Once you have configured your profile in the .env file, start the stack with Docker Compose and use the helper scripts to restart or update it.

  • Start:
    docker compose up -d
  • Restart:
    ./restart-docker.sh
  • Update:
    ./update-docker.sh

After starting the services, you can access the Open WebUI at http://localhost:8080.

Screenshot: the Open WebUI interface.
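
To confirm the stack is healthy, list the running containers and the models Ollama has installed so far (the container name ollama matches the docker exec examples later in this README):

docker compose ps
docker exec -it ollama ollama list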

Connecting with n8n

This setup is part of a larger collection of self-hosted services designed to create a complete, private development stack. It is pre-configured to work seamlessly with projects like n8n-self-hosted.

Since both services are on the shared_network, you can connect to Ollama from your n8n "Ollama" node using http://ollama:11434 as the Base URL.
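
Before wiring it into n8n, you can verify that Ollama is reachable over the shared network by calling its HTTP API from any temporary container attached to the same network (curlimages/curl is just one convenient image that ships with curl):

# Should return a JSON list of the locally installed models
docker run --rm --network shared_network curlimages/curl -s http://ollama:11434/api/tags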

For other components like Qdrant, Caddy, and monitoring, see the list in the See Also section below.

Downloading Models

After starting the services, you can download models in two ways:

1. Using the Command Line

You can pull models directly from the command line using docker exec. Browse available models in the Ollama Library (https://ollama.com/library).

For example, to download the gemma3:1b model, run:

docker exec -it ollama ollama pull gemma3:1b
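
Once the pull completes, you can confirm the model is installed and chat with it straight from the terminal:

docker exec -it ollama ollama list
docker exec -it ollama ollama run gemma3:1b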

2. Using the Open WebUI

You can also download models through the web interface.

  1. Open the Open WebUI at http://localhost:8080.
  2. Open the "Models" dropdown in the top menu.
  3. In the "Pull a model" field, enter the name of the model you want to download (e.g., gemma3:1b) and click the download button.

See Also

Check out other self-hosted solutions:

License

This project is licensed under the MIT License - see the LICENSE file for details.


Author

AiratTop
