The Self-Hosted AI Starter Kit is a Docker Compose template designed to quickly bootstrap a fully featured local AI and low-code development environment. This setup includes essential services such as N8N for workflow automation, PostgreSQL for data storage, Ollama for local LLMs, Qdrant for vector storage, and more.
- Docker and Docker Compose installed on your machine.
- Basic knowledge of Docker and command-line usage.
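If you want to confirm the prerequisites before starting, the following commands should each print version information; nvidia-smi is only relevant if you plan to use the NVIDIA GPU profile (a quick sanity check, not part of the kit itself):

docker --version
docker compose version
nvidia-smi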
- Clone the Repository:
  git clone https://github.com/tazomatalax/Self-Hosted-AI-Starter-Kit.git
  cd self-hosted-ai-starter-kit
- Create a .env File: In the root directory, create a .env file and define the following variables (see the tip below for generating strong secret values):
  POSTGRES_USER=your_postgres_user
  POSTGRES_PASSWORD=your_postgres_password
  POSTGRES_DB=your_postgres_db
  N8N_ENCRYPTION_KEY=your_encryption_key
  N8N_USER_MANAGEMENT_JWT_SECRET=your_jwt_secret
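One way to generate strong values for N8N_ENCRYPTION_KEY and N8N_USER_MANAGEMENT_JWT_SECRET is openssl (a suggestion, not a requirement; any sufficiently long random string works):

# Generate random 32-byte hex strings and paste them into .env
openssl rand -hex 32
openssl rand -hex 32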
- Start the Services: If you are using an NVIDIA GPU, run the following command to start all services:
  ./run.sh
  If you plan to use your PC's CPU instead, edit the run, stop, and update scripts accordingly: uncomment the CPU profile commands and comment out the GPU profile commands, as sketched below.
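For reference, a CPU-only run.sh would look roughly like this after swapping the comments (a sketch; the actual scripts in the repository may differ slightly):

#!/bin/bash
# GPU profile disabled
# docker compose --profile gpu-nvidia up -d
# Run docker compose with the CPU profile
docker compose --profile cpu up -d
echo "Docker containers have been started."
echo "Access your services at: http://localhost:3333 (Homepage URL)"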
The stack includes the following services:

- N8N
  - Image: n8nio/n8n:latest
  - Ports: 5678:5678
  - Description: A low-code platform for automating workflows.
- PostgreSQL
  - Image: postgres:16-alpine
  - Ports: 5432:5432
  - Description: Database service for storing N8N data.
- Ollama
  - Image: ollama/ollama:latest
  - Ports: 11434:11434
  - Description: A platform for running local LLMs.
- Qdrant
  - Image: qdrant/qdrant
  - Ports: 6333:6333
  - Description: High-performance vector storage.
- Portainer
  - Image: portainer/portainer-ce
  - Ports: 9000:9000
  - Description: Management interface for Docker containers.
- Homepage
  - Image: ghcr.io/gethomepage/homepage:latest
  - Ports: 3333:3000
  - Description: A customizable homepage for quick access to services.
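Once the stack is up, you can confirm which containers are running and which host ports they expose; passing the same profile you started with keeps the output tied to this compose project, while plain docker ps is a profile-agnostic alternative:

docker compose --profile gpu-nvidia ps
docker ps --format "table {{.Names}}\t{{.Image}}\t{{.Ports}}"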
- Access N8N at http://localhost:5678
- Connect to PostgreSQL at localhost:5432 (standard PostgreSQL protocol, not HTTP)
- Access Ollama at http://localhost:11434
- Access Qdrant at http://localhost:6333
- Access Portainer at http://localhost:9000
- Access Homepage at http://localhost:3333
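A few quick reachability checks (a sketch; the PostgreSQL credentials are the placeholders from your .env file, and the exact HTTP responses may vary by version):

curl http://localhost:5678       # N8N editor UI
curl http://localhost:11434      # Ollama usually replies "Ollama is running"
curl http://localhost:6333       # Qdrant returns a small JSON status document
psql -h localhost -p 5432 -U your_postgres_user -d your_postgres_db -c "SELECT version();"

To download a model such as llama3 into Ollama, you can run the ollama CLI inside its container; the container name used below is an assumption, so check docker ps for the actual name:

docker exec -it ollama ollama pull llama3   # container name assumed; verify with docker ps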
This script (run.sh) starts the Docker containers using the GPU profile:
#!/bin/bash
# Start docker compose with the GPU profile (detached, so the echo below runs immediately)
docker compose --profile gpu-nvidia up -d
# Start docker compose with the CPU profile instead
# docker compose --profile cpu up -d
# Display the Homepage URL
echo "Docker containers have been started."
echo "Access your services at: http://localhost:3333 (Homepage URL)"
This script (update.sh) pulls the latest images and recreates the containers:
#!/bin/bash
# Pull the latest images and restart the containers with the GPU profile
docker compose --profile gpu-nvidia pull
docker compose --profile gpu-nvidia up -d
# Pull the latest images and restart the containers with the CPU profile instead
# docker compose --profile cpu pull
# docker compose --profile cpu up -d
# Display the Homepage URL
echo "Docker containers have been updated."
echo "Access your services at: http://localhost:3333 (Homepage URL)"
This script stops and removes the stack's containers:
#!/bin/bash
# Stop and remove the containers started with the GPU profile
docker compose --profile gpu-nvidia down
# Stop and remove the containers started with the CPU profile instead
# docker compose --profile cpu down
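If you also want to remove the named volumes, for example to reset the databases, Compose accepts a -v flag; note that this deletes all persisted data and is not part of the provided stop script:

# WARNING: also removes volumes, i.e. all stored data
docker compose --profile gpu-nvidia down -v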
To upgrade your services, run the update script:
./update.sh
For issues or questions, please open an issue in the repository or join the community discussions.
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.