This project provides a ready-to-use Docker Compose setup for running the DeepSeek-R1 language model locally, along with a simple web-based user interface.
- Local LLM Inference: Run DeepSeek-R1 models on your own hardware using Ollama.
- Web UI: Interact with the model via a browser-based interface in the `web/` directory.
- Easy Model Management: Pull and manage model versions with Docker commands.
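The stack assumes Docker with the Compose v2 plugin is already installed on the host; a quick way to confirm both before starting:

```bash
# Both commands should print a version string
docker --version
docker compose version
```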
- Start the Ollama service and both web UIs

  ```bash
  docker compose up -d ollama
  docker compose up -d web
  docker compose up -d open-webui
  ```
- Pull the DeepSeek-R1 model

  Choose your desired model size (e.g., `1.5b`):

  ```bash
  docker compose exec ollama ollama pull deepseek-r1:1.5b
  ```
- Access the web UIs

  - Custom UI: http://localhost:6001
  - Open WebUI: http://localhost:6002

  Open either URL in your browser to interact with the model.
- Add new models

  ```bash
  docker compose exec ollama ollama pull deepseek-r1:7b
  docker compose exec ollama ollama pull deepseek-coder:1.3b
  ```
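Pulled models can be listed or removed with the standard `ollama` subcommands, run inside the container:

```bash
# Show which models are available locally
docker compose exec ollama ollama list

# Remove a model you no longer need
docker compose exec ollama ollama rm deepseek-coder:1.3b
```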
```
.
├── docker-compose.yml       # Docker Compose configuration for Ollama
├── ollama-models/           # Model storage (keys, manifests, blobs)
│   ├── id_ed25519
│   ├── id_ed25519.pub
│   └── models/
│       ├── blobs/           # Model weights and data
│       └── manifests/       # Model manifests
├── web/                     # Web UI files
│   ├── index.html
│   ├── ollama.js
│   ├── showdown.min.js
│   └── style.css
└── README.md                # Project documentation
```
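The `docker-compose.yml` at the top of this tree wires the three services together. As a rough sketch only, a minimal configuration could look like the following; the image names, internal ports, and mount paths are assumptions inferred from the directory layout and the URLs in this README, not the repository's actual file:

```yaml
# Hypothetical sketch -- the real docker-compose.yml may differ
services:
  ollama:
    image: ollama/ollama                 # official Ollama image
    ports:
      - "11434:11434"                    # Ollama HTTP API
    volumes:
      - ./ollama-models:/root/.ollama    # persists keys, manifests, and blobs

  web:
    image: nginx:alpine                  # assumption: any static file server works
    ports:
      - "6001:80"                        # custom UI at http://localhost:6001
    volumes:
      - ./web:/usr/share/nginx/html:ro   # serve the static files in web/

  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "6002:8080"                      # Open WebUI at http://localhost:6002
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434   # point Open WebUI at the ollama service
    depends_on:
      - ollama
```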
- Model files are stored in `ollama-models/`. You can add or remove models as needed.
- The web UI is static and communicates with the Ollama backend.
- For advanced configuration, edit docker-compose.yml.
- Check that Ollama is running on http://localhost:11434 (see the quick check below)
- Custom Web UI is running on http://localhost:6001
- Open WebUI is running on http://localhost:6002
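For a quick command-line check, you can hit the Ollama API directly; this is the same HTTP API the static web UI and Open WebUI talk to (the model tag below assumes you pulled `deepseek-r1:1.5b`):

```bash
# The root endpoint answers with "Ollama is running"
curl http://localhost:11434

# Ask the pulled model a question via the generate endpoint
curl http://localhost:11434/api/generate -d '{
  "model": "deepseek-r1:1.5b",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```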
- Ollama Documentation
- DeepSeek-R1 Model Card
- https://dev.to/savvasstephnds/run-deepseek-locally-using-docker-2pdm
- https://platzi.com/blog/deepseek-r1-instalar-local/
- https://www.composerize.com/