Your Complete AI Stack for Local Networks
40+ AI services accessible via IP:PORT - Perfect for learning, development, testing, and private team collaboration without domain/SSL complexity
Quick Start • Service Ports • Network Configuration • Troubleshooting
This is a port-based local network version of the AI LaunchKit that runs completely in Docker containers without requiring domain configuration, SSL certificates, or host system modifications.
Perfect for:
- 🎓 Learning & Experimentation - Safe environment to learn AI technologies
- 💼 Professional Use - Private AI stack for teams and organizations
- 🧪 Development & Testing - Full-featured environment for building AI applications
- 🏢 Local Deployment - Keep all data on-premises in your network
- 👥 Team Collaboration - Share AI services with colleagues on the same network

- ✅ No Domains Required - Access via IP:PORT (e.g., 192.168.1.100:8000)
- ✅ No SSL Setup - HTTP-only for local network use
- ✅ No Caddy/Reverse Proxy - Direct port access to services
- ✅ No Host Modifications - Everything runs in Docker containers
- ✅ Local Network Ready - Designed for LAN access from multiple devices
- ✅ Production-Grade - Same services as the cloud version, just local

 
- Server: Ubuntu 24.04 LTS (64-bit) with fresh installation
- Resources:
  - Minimum: 4 GB RAM, 2 CPU cores, 30 GB disk
  - Recommended: 16+ GB RAM, 8+ CPU cores, 120 GB disk
- Network: Local network access (no internet domain needed)
- Docker Management: Portainer will be installed automatically if not present

```bash
# Clone and install in one command
git clone https://github.com/heinrichhermann/ai-launchkit-local && cd ai-launchkit-local && sudo bash ./scripts/install_local.sh
```

Installation time: 10-15 minutes (plus optional n8n workflows import)
The installer will:
- ✅ Update the system and install Docker
- ✅ Generate secure passwords and API keys
- ✅ Configure services for local network access
- ✅ Start selected services with port mappings
- ✅ Generate an access report with all URLs

No domain prompts, no SSL setup, no host system changes!
Perfect for: People new to Linux, AI, or Docker
Time needed: 30-45 minutes
Experience level: No prior knowledge required!
✅ A Server or Computer running Ubuntu
- Minimum: 4 GB RAM, 2 CPU cores, 30 GB disk
- Recommended: 8+ GB RAM, 4+ CPU cores
- Fresh Ubuntu 24.04 LTS installation

✅ Access to the Server
- SSH connection OR direct console access
- Administrator (sudo) rights

✅ Internet Connection
- For downloading Docker images and services

❌ What You DON'T Need:
- Domain name or website
- SSL certificates
- Advanced Linux knowledge
- Programming experience

What happens: You'll open a terminal and connect to your server
📖 Detailed Connection Instructions
Option A: Using SSH (Remote Connection)

If your server is in another room or location:

1. Find your server's IP address
   - Usually printed during Ubuntu installation
   - Or check your router's connected devices
   - Example: `192.168.1.100`

2. Open a Terminal/SSH client:
   - Windows: Download PuTTY or use Windows Terminal
   - Mac: Open the Terminal app (⌘+Space, type "Terminal")
   - Linux: Open a terminal (Ctrl+Alt+T)

3. Connect via SSH:

   ```bash
   ssh your-username@192.168.1.100
   ```

   Replace `your-username` with your Ubuntu username and `192.168.1.100` with your server's IP.

4. Enter your password when prompted
   - You won't see characters while typing - this is normal!
   - Press Enter when done

5. You're connected! You should see:

   ```
   username@server:~$
   ```

Option B: Direct Console Access

If you're sitting at the server:

1. Login with your username and password
2. You'll see the command prompt: `username@server:~$` - You're ready!
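If you connect often, an SSH config entry saves typing. A minimal sketch; the host alias `ailab`, the IP, and the username are examples, not something the installer creates:

```
# ~/.ssh/config
Host ailab
    HostName 192.168.1.100
    User your-username
```

With this in place, `ssh ailab` is equivalent to the full command above.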
 
What happens: One command downloads everything and starts the automated installation
Copy and paste this into your terminal:

```bash
git clone https://github.com/heinrichhermann/ai-launchkit-local && cd ai-launchkit-local && sudo bash ./scripts/install_local.sh
```

Press Enter, then type your password when asked.
📖 What This Command Does
Breaking down the command:
Part 1: `git clone https://github.com/heinrichhermann/ai-launchkit-local`
- Downloads the AI Learning Kit to your server
- Creates folder: `ai-launchkit-local`

Part 2: `cd ai-launchkit-local`
- Changes into that folder
- All scripts are now accessible

Part 3: `sudo bash ./scripts/install_local.sh`
- `sudo` = Run as administrator (needed for installation)
- `bash` = Execute the script
- `./scripts/install_local.sh` = The installation script
Why sudo?
- Installs system packages (Docker, etc.)
 - Configures firewall
 - Needs administrator privileges
 
What happens: The wizard asks you a few simple questions to customize your installation
📖 Question-by-Question Guide
```
Allow access from local network (recommended for LAN installation)?
Configure LAN access now? (Y/n):
```

What to do: ✅ Press ENTER (= Yes)

What it means:
- Opens firewall ports for your local network
- Devices on your network can access the services
- Example: access from your laptop, phone, or tablet

Technical details: adds firewall rules for 192.168.x.x and 10.x.x.x networks
```
Choose services for your local network AI development stack.
[Checkbox menu appears]
```

What to do:
- Use ↑↓ arrow keys to move up/down
- Press SPACEBAR to select/deselect services
- Press ENTER when done

Recommended for beginners:
- ✅ n8n (Workflow Automation) - Already selected
- ✅ flowise (AI Agent Builder) - Already selected
- ✅ monitoring (Grafana dashboards) - Already selected

⚠️ Leave the others unselected for now (you can add them later)

What each service does:
- n8n: Build AI automation workflows (like IFTTT for AI)
- Flowise: Create chatbots with drag-and-drop
- monitoring: See system performance (RAM, CPU usage)

 
```
✅ Detected LAN IP address: 192.168.1.100
Use this IP address for network access? (Y/n):
```

What to do: ✅ Press ENTER (= Yes)

What it means:
- Your services will be accessible at http://192.168.1.100:PORT
- This IP was automatically detected
- You can change it later if needed

```
Choose the hardware profile for Ollama local LLM runtime.
( ) CPU (Recommended for most users)
( ) NVIDIA GPU (Requires NVIDIA drivers)
( ) AMD GPU (Requires ROCm drivers)
```

What to do:
- ✅ Select "CPU" and press ENTER (for most users)
- Only choose a GPU option if you have a powerful graphics card

What it means:
- CPU: uses the processor for AI (slower but works everywhere)
- GPU: uses the graphics card (much faster but needs setup)
 
```
Import workflows? (y/n):
```

What to do: Type `n` and press ENTER (skip for now)

Why skip:
- Takes 30 minutes to import 300+ workflows
- You probably want to explore first
- You can import later anytime

If you choose yes:
- Installation continues in the background
- Check progress: `docker logs n8n-import`
```
Admin Email:
Admin Password:
```

What to do: ✅ Enter your email and a strong password

What are these for:
- Grafana login - view system monitoring dashboards (Port 8003)
- Langfuse login - track LLM performance and usage (Port 8096)
- The same credentials work for both services
- This is YOUR admin account, not public-facing

Security note:
- Only accessible from your local network
- No internet exposure
- A strong password is still recommended

 
```
Enter the number of n8n workers to run (default is 1):
```

What to do: ✅ Press ENTER (use the default: 1)

What it means:
- Workers process workflows in parallel
- 1 worker is fine for learning
- More workers = more workflows can run simultaneously
 
```
OpenAI API Key (press Enter to skip):
Anthropic API Key (press Enter to skip):
Groq API Key (press Enter to skip):
```

What to do: ✅ Press ENTER 3 times (skip all)

What are these:
- Optional paid API keys for cloud AI services
- Not needed for local AI with Ollama
- You can add them later in the .env file

 
Installation Progress:
You'll now see:
```
========== STEP 1: System Preparation ==========
✅ System preparation complete!
========== STEP 2: Installing Docker ==========
✅ Docker installation complete!
========== STEP 3: Generating Local Network Configuration ==========
✅ Local network configuration complete!
[... more steps ...]
🎉 AI LaunchKit Local Network Installation Complete!
```
Wait for all steps to complete (10-15 minutes)
What happens: All services are running, you can start using them immediately!
You'll see a summary like this:
```
🎉 INSTALLATION SUCCESSFUL!

Your services are accessible at:
- n8n (Workflow Automation): http://192.168.1.100:8000
- Flowise (AI Agent Builder): http://192.168.1.100:8022
- Grafana (Monitoring): http://192.168.1.100:8003
- Mailpit (Email Testing): http://192.168.1.100:8071
```
What to do next:

1. Open your browser (can be on any device!)
2. Navigate to n8n: `http://YOUR-SERVER-IP:8000`
3. Create your admin account:
   - Enter your email
   - Choose a strong password
   - Click "Next"
4. You're in! Welcome to n8n 🎉
 
Let's build a simple AI workflow to test everything:
📖 Complete Beginner Tutorial
What you'll learn:
- How to create a workflow in n8n
 - How to connect to Ollama (local AI)
 - How to see AI responses
 
Steps:

1. Open n8n: `http://YOUR-SERVER-IP:8000`

2. Click the big "+" button (Create New Workflow)

3. Click "Add first step"
   - A panel opens on the right

4. Search for "Manual Trigger"
   - Type "manual" in the search box
   - Click "Manual Trigger" when it appears
   - This lets you start the workflow with a button click

5. Click the "+" icon next to your Manual Trigger node
   - This adds the next step

6. Search for "HTTP Request"
   - Type "http" in the search box
   - Click "HTTP Request"

7. Configure the HTTP Request: click on the HTTP Request node and fill in:
   - Method: Select "POST" from the dropdown
   - URL: Enter this exactly: `http://ollama:11434/api/generate`
   - Authentication: Leave as "None"
   - Send Body: Toggle ON
   - Body Content Type: Select "JSON"
   - Specify Body: Select "Using JSON"
   - JSON: Paste this:

   ```json
   {
     "model": "qwen2.5:7b-instruct-q4_K_M",
     "prompt": "Hello! Please introduce yourself and explain what you can do.",
     "stream": false
   }
   ```

8. Click the "Execute Workflow" button (top right)
   - Wait 5-10 seconds for the AI to respond

9. See the result!
   - Click on the HTTP Request node
   - Look at the "Output" tab
   - You'll see the AI's response in JSON format

Congratulations! 🎉 You just created your first AI workflow!
What just happened:
- n8n sent a request to Ollama (your local AI)
 - Ollama processed your question
 - Ollama sent back an answer
 - You can now build on this to create complex automations!
 
Next steps:
- Try different prompts
 - Add more nodes (email, database, etc.)
 - Explore the 300+ workflow templates
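The same request the HTTP Request node sends can be reproduced from a terminal, which is handy for checking that Ollama itself is up. A sketch, assuming the tutorial's model name and Ollama's published host port 8021 from the port table in this guide:

```shell
# Build the same JSON body the n8n node sends (model name from the tutorial above)
payload='{"model": "qwen2.5:7b-instruct-q4_K_M", "prompt": "Hello!", "stream": false}'

# From inside the Docker network (e.g., from n8n) the URL is http://ollama:11434/api/generate.
# From another machine on the LAN, Ollama is published on port 8021; uncomment to try:
# curl -s "http://192.168.1.100:8021/api/generate" -d "$payload"
echo "$payload"
```

If the curl call returns JSON containing a `response` field, Ollama is working and any problem lies in the n8n node configuration.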
 
🖥️ Can I install this on Windows or Mac directly?
No, not directly. The AI Kit requires Ubuntu Linux.
Your options:
Option 1: Virtual Machine (Easiest for Windows/Mac)
- Install VirtualBox (free)
 - Download Ubuntu 24.04 LTS
 - Create VM with 8GB RAM, 4 CPU cores, 60GB disk
 - Install Ubuntu in the VM
 - Follow installation guide above
 - Access services from your Windows/Mac browser
 
Option 2: Cloud VPS (No local hardware needed)
- Rent an Ubuntu server from providers like:
  - DigitalOcean ($12/month for 4 GB RAM)
  - Linode/Akamai ($10/month for 4 GB RAM)
  - Hetzner ($5/month for 4 GB RAM - Europe)
- Get SSH access details
- Follow the installation guide above
- Access from anywhere
 
Option 3: Raspberry Pi (Cheap dedicated hardware)
- Raspberry Pi 4/5 with 8GB RAM ($80)
 - Install Ubuntu Server 24.04
 - Perfect for 24/7 operation
 - Low power consumption
 
❌ I got an error during installation, what should I do?
Don't worry! The system automatically cleans up on errors.
Steps to resolve:

1. Read the error message carefully
   - It shows exactly which step failed
   - Example: "Installation failed at: Docker Installation"

2. Check common issues:

   Error: "Port already in use"
   - Another program is using that port
   - Check what: `sudo netstat -tuln | grep :8000`
   - Stop that program or choose a different port

   Error: "Out of memory" or "Cannot allocate memory"
   - Server doesn't have enough RAM
   - Need minimum 4 GB RAM
   - Upgrade the server or select fewer services

   Error: "Docker daemon not responding"
   - Docker installation failed
   - Try: `sudo systemctl status docker`
   - Restart: `sudo systemctl restart docker`

   Error: "Permission denied"
   - Not running with sudo
   - Use: `sudo bash ./scripts/install_local.sh`

3. Try the installation again:

   ```bash
   cd ai-launchkit-local
   sudo bash ./scripts/install_local.sh
   ```

   - The previous failed installation was rolled back automatically
   - Safe to re-run

4. Still having issues?
   - Post in GitHub Issues
   - Include: error message, Ubuntu version, server specs
   - The community will help!

 
📱 How do I access services from my phone, laptop, or tablet?
It's automatic! No configuration needed.
Steps:

1. Make sure your device is on the same WiFi/network as the server

2. Find your server's IP (shown during installation)
   - Example: `192.168.1.100`

3. Open any browser on your device
   - Chrome, Safari, Firefox, Edge - all work!

4. Type in the address bar: `http://192.168.1.100:8000`
   - Replace `192.168.1.100` with YOUR server IP

5. Done! Services work identically on all devices

Examples:
- From laptop: `http://192.168.1.100:8000` (n8n)
- From phone: `http://192.168.1.100:8022` (Flowise)
- From tablet: `http://192.168.1.100:8003` (Grafana)
- From any device: `http://192.168.1.100/` (Dashboard)
Tip: Bookmark these URLs on your devices!
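A quick way to confirm reachability from a Linux or macOS device is a small shell loop; the IP and port list below are examples from this guide, so substitute your own:

```shell
SERVER_IP=192.168.1.100   # replace with YOUR server IP
for port in 8000 8003 8022 8071; do
  echo "checking http://$SERVER_IP:$port"
  # Uncomment to actually probe (prints the HTTP status code, e.g. 200):
  # curl -s -o /dev/null -w "%{http_code}\n" "http://$SERVER_IP:$port"
done
```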
🗑️ How do I uninstall everything?
Safe and simple uninstallation with automatic backup:

```bash
cd ai-launchkit-local
sudo bash ./scripts/uninstall_local.sh
```

What happens:

1. Shows current status
   - Lists all running services
   - Shows data volumes

2. Asks for confirmation
   - Type `yes` to proceed
   - Anything else cancels

3. Offers backup (Recommended!)
   - Creates a backup of workflows, databases, volumes
   - Saves to: `~/ai-launchkit-backup-YYYYMMDD-HHMMSS/`
   - Press ENTER to accept

4. Removes AI LaunchKit
   - Stops all containers
   - Removes data (with your permission)
   - Cleans up Docker images

5. Preserves important stuff:
   - ✅ Docker (in case you use it for other things)
   - ✅ Portainer (Docker management tool)
   - ✅ Your project folder (can reinstall anytime)

After uninstall:
- Server is back to a clean state
- Can reinstall anytime: `sudo bash ./scripts/install_local.sh`
- Or restore from backup
 
🔧 What if I want to change the services later?
You can easily add or remove services:

1. Stop all services:

   ```bash
   cd ai-launchkit-local
   docker compose -p localai -f docker-compose.local.yml down
   ```

2. Edit the .env file:

   ```bash
   nano .env
   ```

3. Find the line starting with: `COMPOSE_PROFILES=`

4. Add or remove service names (comma-separated)
   - Example: `COMPOSE_PROFILES="n8n,flowise,monitoring,cpu,comfyui"`
   - Available: n8n, flowise, bolt, openui, monitoring, cpu, gpu-nvidia, gpu-amd, calcom, baserow, nocodb, vikunja, leantime, and more

5. Save and exit:
   - Press `Ctrl+X`
   - Press `Y` (yes, save)
   - Press `Enter`

6. Start services again:

   ```bash
   docker compose -p localai -f docker-compose.local.yml up -d
   ```

7. Wait 2-3 minutes for new services to start
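If you prefer a one-liner over editing in nano, the profile list can also be extended with sed. A sketch, assuming COMPOSE_PROFILES keeps the quoted format shown above (the `-i.bak` variant keeps a backup of .env):

```shell
# Append ",comfyui" inside the closing quote of COMPOSE_PROFILES.
# Against the real file this would be:
#   sed -i.bak 's/^\(COMPOSE_PROFILES=.*\)"$/\1,comfyui"/' .env
# Demonstrated here on a sample line:
line='COMPOSE_PROFILES="n8n,flowise,monitoring"'
updated=$(printf '%s\n' "$line" | sed 's/^\(COMPOSE_PROFILES=.*\)"$/\1,comfyui"/')
echo "$updated"   # COMPOSE_PROFILES="n8n,flowise,monitoring,comfyui"
```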
 
⚡ How do I stop or restart services?
Stop all services:

```bash
cd ai-launchkit-local
docker compose -p localai -f docker-compose.local.yml down
```

Start all services:

```bash
cd ai-launchkit-local
docker compose -p localai -f docker-compose.local.yml up -d
```

Restart a specific service (example: n8n):

```bash
docker compose -p localai -f docker-compose.local.yml restart n8n
```

See what's running:

```bash
docker ps
```

Check service logs (if something doesn't work):

```bash
docker compose -p localai -f docker-compose.local.yml logs n8n
```

Replace `n8n` with any service name.
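To skim several services at once, a small loop helps; the service names here are the ones used throughout this guide:

```shell
# Show the last few log lines for each selected service
for svc in n8n flowise; do
  echo "=== $svc ==="
  # Uncomment on the server:
  # docker compose -p localai -f docker-compose.local.yml logs --tail 20 "$svc"
done
```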
💾 Where is my data stored?
All data is in Docker volumes.

List all volumes:

```bash
docker volume ls | grep localai_
```

Your important data:
- `localai_n8n_storage` - All your workflows
- `localai_langfuse_postgres_data` - All databases
- `localai_ollama_storage` - Downloaded AI models

Backup data:
- During uninstall: select "Yes" for backup - the uninstall script creates an automatic backup

Where are backups:
- Location: `~/ai-launchkit-backup-YYYYMMDD-HHMMSS/`
- Contains: workflows, databases, all volumes as .tar.gz files
 
🆘 I'm stuck! Where can I get help?
Don't worry - help is available!
1. Check the logs:

```bash
cd ai-launchkit-local
docker compose -p localai -f docker-compose.local.yml logs
```

2. Check a specific service:

```bash
docker logs n8n
docker logs flowise
docker logs postgres
```

3. Ask for help:
- GitHub Issues: report a problem
  - Include: error message, what you were doing, Ubuntu version
- Original AI LaunchKit: main project
- Community Forum: oTTomator Think Tank

4. Common commands for troubleshooting:

```bash
# Check if Docker is running
sudo systemctl status docker

# Check system resources
htop  # Press Q to exit

# Check disk space
df -h

# Check memory
free -h

# Restart Docker
sudo systemctl restart docker
```

All services are accessible via http://SERVER_IP:PORT:
| Port | Service | Description | 
|---|---|---|
| 8000 | n8n | Workflow Automation Platform | 
| 8001 | PostgreSQL | Database (external access) | 
| 8002 | Redis | Cache Database (external access) | 
| 8003 | Grafana | Monitoring Dashboards | 
| 8004 | Prometheus | Metrics Collection | 
| 8005 | Node Exporter | System Metrics | 
| 8006 | cAdvisor | Container Monitoring | 
| 8007 | Portainer | Docker Management UI | 
| Port | Service | Description | 
|---|---|---|
| 8020 | Open WebUI | ChatGPT-like Interface | 
| 8021 | Ollama | Local LLM Runtime | 
| 8022 | Flowise | AI Agent Builder | 
| 8023 | bolt.diy | AI Web Development | 
| 8024 | ComfyUI | Image Generation | 
| 8025 | OpenUI | AI UI Component Generator | 
| 8026 | Qdrant | Vector Database | 
| 8027 | Weaviate | Vector Database with API | 
| 8028 | Neo4j | Graph Database | 
| 8029 | LightRAG | Graph-based RAG | 
| 8030 | RAGApp | RAG Interface | 
| 8031 | Letta | Agent Server | 
| Port | Service | Description | Setup Guide | 
|---|---|---|---|
| 8040 | Cal.com | Scheduling Platform | - | 
| 8047 | Baserow | Airtable Alternative | β Setup Guide | 
| 8048 | NocoDB | Smart Spreadsheet | - | 
| 8049 | Vikunja | Task Management | - | 
| 8050 | Leantime | Project Management | - | 
| Port | Service | Description | Setup Guide | 
|---|---|---|---|
| 8060 | Postiz | Social Media Manager | β Providers Setup | 
| 8062 | Kopia | Backup System | - | 
| Port | Service | Description | 
|---|---|---|
| 8071 | Mailpit Web UI | Email catcher for development & testing | 
Note: Mailpit captures ALL emails for learning purposes. No external email delivery.
| Port | Service | Description | Setup Guide | 
|---|---|---|---|
| 8080 | Whisper | Speech-to-Text | - | 
| 8081 | OpenedAI-Speech | Text-to-Speech | - | 
| 8082 | LibreTranslate | Translation Service | - | 
| 8083 | Scriberr | Audio Transcription | β Troubleshooting | 
| 8084 | Tesseract OCR | Text Recognition (Fast) | - |
| 8085 | EasyOCR | Text Recognition (Quality) | - |
| 8086 | Stirling-PDF | PDF Tools Suite | - |
| 8087 | Chatterbox TTS | Advanced Text-to-Speech | - |
| 8088 | Chatterbox UI | TTS Web Interface | - |
| 8089 | SearXNG | Private Search Engine | - |
| 8090 | Perplexica | AI Search Engine | - |
| 8091 | Formbricks | Survey Platform | - |
| 8092 | Metabase | Business Intelligence | - |
| 8093 | Crawl4AI | Web Crawler | - |
| 8094 | Gotenberg | Document Conversion | - |
| 8095 | Python Runner | Custom Python Scripts | - |
| Port | Service | Description | Setup Guide | 
|---|---|---|---|
| 8096 | Langfuse | LLM Performance Tracking | β Integration Guide | 
| 8097 | ClickHouse | Analytics Database | - | 
| 8098 | MinIO | Object Storage | - | 
| 8099 | MinIO Console | Storage Management | - | 
| Port | Service | Description | Setup Guide | 
|---|---|---|---|
| 8100 | Open Notebook | NotebookLM Alternative - Research Assistant | β Setup | 
| 8101 | Open Notebook API | REST API for Open Notebook | β TTS Integration | 
| Port | Service | Protocol | Description | 
|---|---|---|---|
| 7687 | Neo4j Bolt | TCP | Graph Database Protocol | 
This AI LaunchKit serves multiple purposes - from education to professional deployment. Here are practical scenarios for each service category:
Team AI Infrastructure:
- Deploy private AI services for your organization
 - No data leaves your network
 - Full control over models and data
 - Comply with data protection regulations
 
Development Environment:
- Build and test AI applications locally
 - Prototype before cloud deployment
 - Debug workflows in safe environment
 - Test different models and configurations
 
Business Automation:
- Automate internal processes with n8n
 - Build custom AI tools for your team
 - Create private knowledge bases with RAG
 - Process documents without external APIs
 
NotebookLM Alternative - Fully Local & Private!
Open Notebook is the standout feature for creating professional AI-powered podcasts and research:
- YouTube Videos → Multi-speaker AI podcast discussions
- PDFs & Documents → Engaging audio summaries with analysis
- Web Pages & Articles → Podcast episodes with AI hosts debating topics
- Audio/Video Files → Transcribed, analyzed, and converted to podcasts
 
- Speech-to-Text: Faster Whisper with multi-language support
 - Text-to-Speech: OpenedAI Speech + German Thorsten voice (native pronunciation!)
 - LLM Processing: Ollama integration - works 100% offline
 - Cloud Option: Also supports OpenAI, Anthropic, Groq (16+ providers)
 
- Upload: Paste YouTube URL, upload PDF, or add audio file
 - Analyze: AI reads, understands, and structures content
 - Script: AI creates engaging multi-speaker podcast script
 - Generate: Choose 1-4 AI voices with different personalities
 - Download: Professional MP3 ready to publish
 
- Setup Guide - Configuration and first steps
 - TTS Integration - Speech services setup
 - Full Feature Guide - Advanced features and examples
 
Perfect for: Content creators, educators, researchers, and anyone wanting to transform written/video content into engaging audio format!
n8n - Workflow Automation Learning
- Beginner: Build your first "Hello World" workflow with 300+ templates
 - Intermediate: Connect Ollama LLM to process incoming webhooks and auto-respond
 - Advanced: Create multi-agent AI systems using tools, memory, and conditional logic
 
Ollama - Local LLM Experimentation
- Beginner: Run your first local AI model (qwen2.5:7b) and compare with GPT-4
 - Intermediate: Test different models for specific tasks (coding, translation, analysis)
 - Advanced: Fine-tune models and benchmark performance metrics
 
Flowise - AI Agent Builder
- Beginner: Build a chatbot using drag-and-drop nodes in 5 minutes
 - Intermediate: Create a RAG system that searches your documents using Qdrant
 - Advanced: Build autonomous agents with tool-calling and memory management
 
Open WebUI - Prompt Engineering Lab
- Beginner: Learn effective prompt engineering with instant feedback
 - Intermediate: Compare different models side-by-side for the same prompts
 - Advanced: Create custom model pipelines and share them with your team
 
Qdrant - Semantic Search Learning
- Beginner: Upload documents and perform your first vector similarity search
 - Intermediate: Build a "Chat with your PDFs" application using n8n
 - Advanced: Implement hybrid search combining keywords and semantic vectors
 
Weaviate - AI-Powered Recommendations
- Beginner: Import product data and get AI-generated recommendations
 - Intermediate: Build a content recommendation engine with custom schemas
 - Advanced: Implement multi-modal search across text, images, and metadata
 
LightRAG - Graph-Based Retrieval
- Beginner: Understand how knowledge graphs improve RAG accuracy
 - Intermediate: Build a question-answering system with relationship awareness
 - Advanced: Combine graph structure with vector embeddings for complex queries
 
Neo4j - Graph Database Mastery
- Beginner: Model real-world relationships (social networks, org charts)
 - Intermediate: Write Cypher queries to find patterns in connected data
 - Advanced: Build recommendation engines using graph algorithms
 
Cal.com - Scheduling Automation
- Beginner: Set up automated meeting scheduling with calendar sync
 - Intermediate: Create custom booking workflows with n8n webhooks
 - Advanced: Build AI-assisted meeting preparation with pre-call research
 
Baserow & NocoDB - No-Code Database Learning
- Beginner: Create your first database with forms and views in the browser
 - Intermediate: Connect to n8n workflows for automated data processing
 - Advanced: Build custom business applications with API integrations
 
Vikunja & Leantime - Project Management Workflows
- Beginner: Organize personal projects with Kanban boards and Gantt charts
 - Intermediate: Automate task creation from emails using n8n + Mailpit
 - Advanced: Build AI-powered project analysis and reporting systems
 
ComfyUI - Image Generation Pipelines
- Beginner: Generate your first AI image using pre-built workflows
 - Intermediate: Create custom node graphs for specific art styles
 - Advanced: Build automated image processing pipelines with batch operations
 
bolt.diy - AI Coding Assistant
- Beginner: Generate a complete web app from a simple prompt
 - Intermediate: Learn how AI assistants structure projects and write code
 - Advanced: Compare Claude, GPT-4, and Groq for different coding tasks
 
Whisper + TTS - Voice AI Learning
- Beginner: Transcribe audio files and convert text back to speech
 - Intermediate: Build voice-controlled workflows with n8n
 - Advanced: Create real-time voice translation systems
 
OCR Bundle - Document Processing
- Beginner: Extract text from images and PDFs automatically
 - Intermediate: Build automated invoice processing with n8n workflows
 - Advanced: Compare Tesseract (fast) vs EasyOCR (accurate) for different document types
 
LibreTranslate - Translation Experiments
- Beginner: Translate text in 20+ languages without external APIs
 - Intermediate: Build multilingual content workflows with n8n
 - Advanced: Compare neural translation quality across different language pairs
 
Perplexica & SearXNG - Search Engine Learning
- Beginner: Understand privacy-focused search without tracking
 - Intermediate: Build custom search APIs with filtering and ranking
 - Advanced: Create AI-enhanced research workflows combining search + LLM analysis
 
Pattern 1: n8n + Ollama + Qdrant
Build a complete RAG system that:
- Indexes documents into Qdrant
 - Retrieves relevant context on questions
 - Uses Ollama to generate informed answers
 
Pattern 2: Whisper + LLM + TTS
Create a voice assistant that:
- Transcribes speech with Whisper
 - Processes with local LLM
 - Responds with natural TTS
 
Pattern 3: Crawl4AI + LLM + Email
Build a research assistant that:
- Crawls websites on schedule
 - Summarizes content with LLM
 - Emails digests via Mailpit
 
Pattern 4: Cal.com + n8n + LLM
Create smart scheduling that:
- Receives booking webhooks
 - Analyzes meeting context with AI
 - Prepares briefing materials
 
Path 1: AI Automation Fundamentals (1-2 weeks)
- Set up n8n + Ollama + Mailpit
 - Build 5 basic workflows with templates
 - Create your first AI-powered automation
 
Path 2: RAG System Development (2-4 weeks)
- Learn vector databases with Qdrant
 - Build document ingestion pipelines
 - Create production-ready RAG applications
 
Path 3: Multi-Agent Systems (4-8 weeks)
- Master Flowise agent building
 - Implement tool-calling and memory
 - Build autonomous multi-agent workflows
 
Path 4: Voice AI Development (2-3 weeks)
- Learn transcription with Whisper
 - Process audio with LLMs
 - Generate natural speech responses
 
Start Small: Begin with 3-5 core services (n8n, Ollama, Flowise, Mailpit, Monitoring)
Progressive Complexity: Master one service before adding others
Document Everything: Use n8n's notes feature to document your learning
Experiment Safely: All services are isolated in Docker - break things and rebuild!
Monitor Performance: Use Grafana to understand resource usage patterns
Join Community: Share your learning projects in forums and Discord
During installation, the wizard will:
- ✅ Auto-detect your server's LAN IP (e.g., 192.168.1.100)
- ✅ Configure SERVER_IP automatically in .env
- ✅ Set up firewall rules for LAN access (ports 8000-8099)

 
After installation, services are immediately accessible from ANY device:
```
http://192.168.1.100:8000  # n8n from laptop
http://192.168.1.100:8022  # Flowise from phone
http://192.168.1.100:8003  # Grafana from tablet
http://192.168.1.100:8071  # Email interface from any device
```
No manual configuration needed! Just open the URL from any device on your network.
If you declined LAN access during installation, services use localhost:

```
# Access only from the server
http://127.0.0.1:8000  # n8n
http://127.0.0.1:8022  # Flowise
http://127.0.0.1:8003  # Grafana
```

To enable LAN access later:

1. Find your LAN IP: `ip addr show | grep 'inet ' | grep -v 127.0.0.1`
2. Update .env: `sed -i 's/SERVER_IP=127.0.0.1/SERVER_IP=192.168.1.100/' .env`
3. Restart: `docker compose -p localai -f docker-compose.local.yml restart`
4. Add firewall rules: `sudo ufw allow from 192.168.0.0/16 to any port 8000:8099`
Check your current firewall configuration:
```bash
# View firewall status
sudo ufw status

# If LAN access wasn't configured during installation, add it manually:
sudo ufw allow from 192.168.0.0/16 to any port 8000:8099
sudo ufw allow from 10.0.0.0/8 to any port 8000:8099
sudo ufw reload
```

📋 Click to see what the installation wizard configures automatically (Informational Only)
The installation wizard automatically handles all configuration:
1. Environment File (.env)
- ✅ Created from the `.env.local.example` template
- ✅ All passwords generated automatically (32+ characters)
- ✅ All settings configured by the wizard
- ✅ You never need to edit .env manually during installation

2. Network Configuration
- ✅ SERVER_IP auto-detected (e.g., 192.168.1.100)
- ✅ Wizard asks you to confirm the detected IP
- ✅ Automatically written to .env
- ✅ All services configured to use this IP

3. Service Selection
- ✅ Interactive checkbox menu in the wizard
- ✅ Use arrow keys to navigate
- ✅ Spacebar to select/deselect
- ✅ Automatically written to COMPOSE_PROFILES in .env

4. Ollama Hardware Selection
- ✅ Choose CPU, NVIDIA GPU, or AMD GPU in the wizard
- ✅ NVIDIA Container Toolkit installed automatically if GPU selected
- ✅ Automatically configured in .env

5. Mail Configuration
- ✅ Mailpit configured automatically
- ✅ All services configured to use Mailpit
- ✅ No external mail server needed

6. Optional API Keys
- ✅ Wizard asks if you want to add OpenAI, Anthropic, Groq keys
- ✅ Press Enter to skip (can add later)
- ✅ Automatically written to .env if provided

After installation completes, everything is ready to use! No manual configuration needed.
Only use this section if you want to make changes after installation.
🔄 How to add or remove services
- You want to try additional services (e.g., add ComfyUI, bolt.diy)
 - You want to remove services you're not using
 - You want to enable GPU after starting with CPU
 
- 
Stop all services:
cd ~/ai-launchkit-local docker compose -p localai -f docker-compose.local.yml down
- This stops all containers safely
 - Your data is preserved in volumes
 
 - 
Open .env file:
nano .env
- nano is a simple text editor
 - Press arrow keys to navigate
 
 - 
Find the COMPOSE_PROFILES line:
COMPOSE_PROFILES="n8n,flowise,monitoring"- It's near the bottom of the file
 - Lists all active services
 
 - 
Edit the services:
- Add services: 
COMPOSE_PROFILES="n8n,flowise,monitoring,cpu,comfyui" - Remove services: Delete the service name
 - Separate with commas (no spaces!)
 
Available services:
- AI: n8n, flowise, bolt, openui, comfyui, cpu, gpu-nvidia, open-webui
 - RAG: qdrant, weaviate, neo4j, lightrag, ragapp
 - Learning: calcom, baserow, nocodb, vikunja, leantime
 - Tools: kopia, postiz, monitoring
 - Specialized: speech, ocr, libretranslate, stirling-pdf, searxng, perplexica
 
 - Add services: 
 - 
Save and exit:
- Press 
Ctrl+X - Press 
Y(yes, save changes) - Press 
Enter 
 - Press 
6. Start services with the new configuration:

   ```bash
   docker compose -p localai -f docker-compose.local.yml up -d
   ```

   - Starts services with your new selection
   - Downloads new service images if needed
   - Takes 2-5 minutes
7. Verify services are running:

   ```bash
   docker ps
   ```

   - Lists all running containers
   - Check for your new services
8. Access the new services:
   - Open a browser: http://SERVER-IP/
   - Click on the newly added services
   - They may need 1-2 minutes to initialize
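The edit-and-restart cycle above can also be scripted. A minimal sketch, assuming your .env keeps COMPOSE_PROFILES on a single quoted line; the profile name `comfyui` and the file `env.demo` are illustrative:

```shell
# Append a Compose profile to the COMPOSE_PROFILES line without opening
# an editor. Demonstrated on a throwaway copy; point ENV_FILE at your
# real ~/ai-launchkit-local/.env to use it for real.
ENV_FILE="env.demo"
printf 'COMPOSE_PROFILES="n8n,flowise,monitoring"\n' > "$ENV_FILE"

NEW_PROFILE="comfyui"
# Insert the profile just before the closing quote, unless already listed.
grep -q "COMPOSE_PROFILES=.*\b${NEW_PROFILE}\b" "$ENV_FILE" || \
  sed -i "s/^\(COMPOSE_PROFILES=\".*\)\"/\1,${NEW_PROFILE}\"/" "$ENV_FILE"

grep '^COMPOSE_PROFILES=' "$ENV_FILE"
# COMPOSE_PROFILES="n8n,flowise,monitoring,comfyui"
```

After changing the real .env, restart with the same `down` / `up -d` commands shown above.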
 
🌐 How to change server IP address

When to use this:
- Your server got a new IP address
 - You want to access from different network
 - You initially skipped LAN configuration
 
1. Find your server's current IP:

   ```bash
   ip addr show | grep 'inet ' | grep -v 127.0.0.1
   ```

   - Shows all network interfaces
   - Look for your LAN IP (e.g., 192.168.1.100)
   - Example output: `inet 192.168.1.100/24`
2. Stop all services:

   ```bash
   cd ~/ai-launchkit-local
   docker compose -p localai -f docker-compose.local.yml down
   ```

3. Edit the .env file:

   ```bash
   nano .env
   ```
4. Find the SERVER_IP line:

   ```bash
   SERVER_IP=192.168.1.100
   ```

   - Usually near the bottom of the file
5. Change it to the new IP:

   ```bash
   SERVER_IP=192.168.1.200  # Your new IP
   ```

6. Save and exit:
   - Press Ctrl+X, then Y, then Enter
7. Restart all services:

   ```bash
   docker compose -p localai -f docker-compose.local.yml up -d
   ```

8. Test access:
   - Open a browser: http://NEW-IP:8000
   - Services should load with the new IP
   - Update bookmarks on your devices
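The nano edit above can be done in one command instead. A sketch, again shown on a throwaway file (`env.demo`) rather than the real .env:

```shell
# Rewrite the SERVER_IP line in place instead of editing with nano.
# Point ENV_FILE at your real ~/ai-launchkit-local/.env to use it.
ENV_FILE="env.demo"
printf 'SERVER_IP=192.168.1.100\n' > "$ENV_FILE"

NEW_IP="192.168.1.200"
sed -i "s/^SERVER_IP=.*/SERVER_IP=${NEW_IP}/" "$ENV_FILE"

grep '^SERVER_IP=' "$ENV_FILE"
# SERVER_IP=192.168.1.200
```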
 
🔑 How to add AI API keys

When to use this:
- You skipped API keys during installation
 - You got new API keys
 - You want to use cloud AI services (OpenAI, Anthropic, Groq)
 
1. Open the .env file:

   ```bash
   cd ~/ai-launchkit-local
   nano .env
   ```

2. Find the API key section:

   ```bash
   # Optional AI API Keys
   OPENAI_API_KEY=
   ANTHROPIC_API_KEY=
   GROQ_API_KEY=
   ```

3. Add your keys:

   ```bash
   OPENAI_API_KEY=sk-your-key-here
   ANTHROPIC_API_KEY=sk-ant-your-key-here
   GROQ_API_KEY=gsk-your-key-here
   ```

   - Put the key directly after the `=` (no quotes needed)
   - Get keys from: OpenAI.com, Anthropic.com, Groq.com
4. Save and exit:
   - Press Ctrl+X, then Y, then Enter
5. Restart the affected services:

   ```bash
   docker compose -p localai -f docker-compose.local.yml restart n8n flowise bolt
   ```

   - Only restarts services that use API keys
   - Faster than restarting everything
6. Test the API keys:
   - Open n8n: http://SERVER-IP:8000
   - Create a workflow with an OpenAI node
   - The API keys should now work
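To see at a glance which keys are actually set before restarting, a small check like this works. A sketch against a demo file; point ENV_FILE at the real .env to use it:

```shell
# List each API key and whether it has a value in the .env file.
ENV_FILE="env.demo"
printf 'OPENAI_API_KEY=sk-your-key-here\nANTHROPIC_API_KEY=\n' > "$ENV_FILE"

for KEY in OPENAI_API_KEY ANTHROPIC_API_KEY GROQ_API_KEY; do
  VALUE=$(grep "^${KEY}=" "$ENV_FILE" | cut -d= -f2-)
  if [ -n "$VALUE" ]; then
    echo "${KEY}: set"
  else
    echo "${KEY}: empty"
  fi
done
# OPENAI_API_KEY: set
# ANTHROPIC_API_KEY: empty
# GROQ_API_KEY: empty
```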
 
⚙️ How to change n8n worker count

When to use this:
- Your system has spare CPU cores
- You want to process more workflows in parallel
- You currently have performance issues
 
1. Check the number of CPU cores:

   ```bash
   nproc
   ```

   - Shows the number of CPU cores
   - Example output: 8
2. Edit the .env file:

   ```bash
   nano .env
   ```

3. Find N8N_WORKER_COUNT:

   ```bash
   N8N_WORKER_COUNT=1
   ```
4. Change the number:

   ```bash
   N8N_WORKER_COUNT=4  # Use 4 workers
   ```

   - Recommendation: use about 50% of your CPU cores
   - Don't exceed the number of cores
5. Save and exit:
   - Press Ctrl+X, then Y, then Enter
6. Restart the n8n workers:

   ```bash
   docker compose -p localai -f docker-compose.local.yml restart n8n-worker
   ```
7. Verify the workers:

   ```bash
   docker ps | grep n8n-worker
   ```

   - Should show multiple n8n-worker containers
   - One per configured worker
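The 50% recommendation above is easy to sanity-check before you save. A sketch with a hard-coded example value; set WORKERS to the count you intend to use:

```shell
# Compare a planned N8N_WORKER_COUNT against the available CPU cores.
CORES=$(nproc)
WORKERS=4   # the value you intend to set

if [ "$WORKERS" -gt "$CORES" ]; then
  echo "Too many workers: ${WORKERS} > ${CORES} cores"
elif [ "$WORKERS" -gt $((CORES / 2)) ]; then
  echo "Above the 50% recommendation (${WORKERS} of ${CORES} cores)"
else
  echo "OK: ${WORKERS} workers on ${CORES} cores"
fi
```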
 
 
If you need to remove AI LaunchKit from your system:
```bash
# Run the uninstall script
sudo bash ./scripts/uninstall_local.sh
```

The uninstall script will:
- ✅ Show current AI LaunchKit status
- ✅ Ask for confirmation before proceeding
- ✅ Offer to create a backup (workflows, databases, volumes)
- ✅ Remove only AI LaunchKit containers and volumes
- ✅ Preserve Portainer (or install it if missing)
- ✅ Optionally keep or remove the .env configuration
 
If you prefer manual removal:
```bash
# Stop all services
docker compose -p localai -f docker-compose.local.yml down

# Remove with volumes (⚠️ DATA LOSS!)
docker compose -p localai -f docker-compose.local.yml down -v

# Remove images (optional)
docker image prune -a -f --filter "label=com.docker.compose.project=localai"
```

What gets removed:
- ❌ All AI LaunchKit containers (n8n, Flowise, Ollama, etc.)
- ❌ All data volumes (workflows, databases, uploaded files)
- ❌ AI LaunchKit Docker networks
- ❌ Unused AI LaunchKit Docker images
 
What is preserved:
- ✅ Portainer (Docker Management UI)
- ✅ Other Docker containers not part of AI LaunchKit
- ✅ The project directory and scripts (you can reinstall anytime)
- ✅ Your .env configuration (optionally backed up)
 
```bash
# Start all services
docker compose -p localai -f docker-compose.local.yml up -d

# Stop all services
docker compose -p localai -f docker-compose.local.yml down

# Restart a specific service
docker compose -p localai -f docker-compose.local.yml restart n8n

# View service logs
docker compose -p localai -f docker-compose.local.yml logs n8n

# Check running services
docker ps

# Monitor resources
docker stats
```

```bash
# Check all service health
./scripts/06_final_report_local.sh

# Test a specific port
nc -z localhost 8000

# Check port usage
netstat -tuln | grep 80
```

Automatic Update (Recommended):
```bash
# Run the update script
cd ai-launchkit-local
sudo bash ./scripts/update_local.sh
```

The update script will:
- ✅ Create an automatic backup of your configuration
- ✅ Pull the latest changes from GitHub
- ✅ Update all Docker images
- ✅ Restart services with the new versions
- ✅ Perform health checks
- ✅ Provide rollback instructions if needed
 
Manual Update:
```bash
cd ai-launchkit-local

# Update repository
git pull origin main

# Update Docker images
docker compose -p localai -f docker-compose.local.yml pull

# Restart services
docker compose -p localai -f docker-compose.local.yml up -d

# Clean up old images
docker image prune -f
```

What Gets Updated:
- ✅ AI LaunchKit scripts and configurations
- ✅ Docker images for all services
- ✅ Landing page and templates
- ✅ Documentation
 
What Gets Preserved:
- ✅ Your .env configuration (automatically backed up and restored)
- ✅ All data in Docker volumes
- ✅ Service selections and settings
 
- Access: http://SERVER_IP:8000
 - First login: Create admin account on first visit
 - Workflows: 300+ templates can be imported during installation
- API: Internal services use http://n8n:5678
- Access: http://SERVER_IP:8003
 - Login: admin / [Check GRAFANA_ADMIN_PASSWORD in .env]
 - Dashboards: Pre-configured for Docker monitoring
 - Data Sources: Prometheus, PostgreSQL
 
- Web UI: http://SERVER_IP:8071
 - SMTP: SERVER_IP:8070 (port 1025 internal)
 - Purpose: Captures all outgoing emails from services
 - No Auth: Open access for local network
 
- PostgreSQL: SERVER_IP:8001
 - Username: postgres
 - Password: Check POSTGRES_PASSWORD in .env
 - Databases: Multiple apps share this instance
 
- Services only accessible from local network
 - No external SSL certificates
 - No internet-facing endpoints
 - Docker network isolation between services
 
- Disabled by default for local network convenience
 - Each service has its own user management system
 - Passwords stored in .env file
 - No Basic Auth layers (unlike original)
 
If deploying to production network:
- Enable authentication on individual services
 - Configure SSL termination (nginx/Apache)
 - Restrict network access with firewall rules
 - Use strong passwords and API keys
 - Consider VPN access for remote users
 
```bash
# n8n Workflow Automation
curl http://127.0.0.1:8000

# Flowise AI Agent Builder
curl http://127.0.0.1:8022

# Grafana Monitoring
curl http://127.0.0.1:8003
```

```bash
# Replace 192.168.1.100 with your server's IP
curl http://192.168.1.100:8000  # n8n
curl http://192.168.1.100:8022  # Flowise
curl http://192.168.1.100:8003  # Grafana
```

```
# From a browser on a phone/laptop/tablet
http://192.168.1.100/         # Service Dashboard
http://192.168.1.100:8000     # n8n interface
http://192.168.1.100:8071     # Email interface
```

```
// n8n webhook from an external service
POST http://192.168.1.100:8000/webhook/your-webhook-id

// Ollama API call
POST http://192.168.1.100:8021/api/generate
{
  "model": "qwen2.5:7b-instruct-q4_K_M",
  "prompt": "Hello world"
}

// Vector search with Qdrant
POST http://192.168.1.100:8026/collections/{collection_name}/points/search
```

- Purpose: Captures ALL emails sent by any service
- Web Interface: http://SERVER_IP:8071
- SMTP Server: SERVER_IP:8070 (internal port 1025)
- Authentication: None needed (local network)
- Storage: Emails stored in a Docker volume (mailpit_data)
 
All services automatically use Mailpit:
```bash
SMTP_HOST=mailpit
SMTP_PORT=1025
SMTP_USER=admin
SMTP_PASS=admin
SMTP_SECURE=false
```

To test it:
- Open any service that sends emails (n8n, Cal.com, etc.)
- Trigger an email action
- Check the Mailpit web interface: http://SERVER_IP:8071
- View email content, headers, attachments
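You can also push a test message straight into Mailpit from the command line, without involving any service, using curl's SMTP support. A sketch; replace 192.168.1.100 with your server's IP, and note the sender/recipient addresses are arbitrary since Mailpit accepts everything:

```shell
# Send a one-off test email to Mailpit's SMTP port from the LAN.
printf 'Subject: Mailpit test\n\nHello from curl\n' | \
  curl -s --url smtp://192.168.1.100:8070 \
       --mail-from test@local.lan \
       --mail-rcpt inbox@local.lan \
       --upload-file -
```

The message should appear immediately in the web UI at http://SERVER_IP:8071.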
 
Check port conflicts:

```bash
netstat -tuln | grep 80
# Look for ports in the range 8000-8099
```

Check Docker resources:

```bash
docker stats
free -h
df -h
```

View service logs:

```bash
docker compose -p localai -f docker-compose.local.yml logs [service_name]
```

Can't access from other devices:
- Check that SERVER_IP in .env matches the server's LAN IP
- Verify the firewall allows access:

  ```bash
  sudo ufw status
  sudo ufw allow from 192.168.1.0/24 to any port 8000:8099 proto tcp
  ```

- Test connectivity: `telnet SERVER_IP 8000`
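Instead of testing one port at a time with telnet, you can sweep the common ports in one go. A sketch using bash's /dev/tcp pseudo-device (adjust the port list to the services you actually enabled):

```shell
# Report which service ports answer. Run on the server itself
# (127.0.0.1) or replace the host with your SERVER_IP.
HOST=127.0.0.1
for PORT in 8000 8001 8003 8022 8071; do
  if timeout 1 bash -c "echo > /dev/tcp/${HOST}/${PORT}" 2>/dev/null; then
    echo "Port ${PORT}: open"
  else
    echo "Port ${PORT}: closed"
  fi
done
```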
Services returning 404/502:
- Wait 2-3 minutes for services to fully start
- Check the service is running: `docker ps | grep service_name`
- Check the port binding: `docker port container_name`
Services can't connect to PostgreSQL:
```bash
# Check PostgreSQL is running
docker ps | grep postgres

# Test the connection from a service
docker exec n8n nc -zv postgres 5432

# Check logs
docker logs postgres
```

High memory usage:

```bash
# Reduce n8n workers
echo "N8N_WORKER_COUNT=1" >> .env
docker compose -p localai -f docker-compose.local.yml restart

# Disable resource-heavy services temporarily
docker compose -p localai -f docker-compose.local.yml stop comfyui langfuse-web
```

Slow response times:
- Check available RAM: `free -h`
- Monitor CPU: `htop`
- Add swap: `sudo fallocate -l 4G /swapfile && sudo chmod 600 /swapfile && sudo mkswap /swapfile && sudo swapon /swapfile`
n8n Not Accessible
```bash
# Check n8n container status
docker logs n8n --tail 50

# Check the database connection
docker exec n8n nc -zv postgres 5432

# Restart n8n
docker compose -p localai -f docker-compose.local.yml restart n8n
```

Common causes:
- Database not ready (wait 2-3 minutes)
- Workflow import still running (check `docker logs n8n-import`)
- Port 8000 already in use
 
Flowise Not Loading
```bash
# Check Flowise logs
docker logs flowise --tail 50

# Verify the port binding
docker port flowise

# Test direct access
curl http://localhost:8022
```

Common causes:
- Container still initializing (wait 1-2 minutes)
 - Port conflict on 8022
 - Missing environment variables
 
Email Not Working
```bash
# Check Mailpit is running
docker ps | grep mailpit

# Test the SMTP connection
docker exec n8n nc -zv mailpit 1025

# Check Mailpit logs
docker logs mailpit
```

Email test:
- Open n8n: http://SERVER_IP:8000
- Create a simple workflow: Manual Trigger → Send Email
 - Execute workflow
 - Check emails: http://SERVER_IP:8071
 
Minimal (n8n + Flowise + Monitoring):
- RAM: 4GB
 - CPU: 2 cores
 - Services: ~8 containers
 
Standard (+ Business Tools):
- RAM: 8GB
 - CPU: 4 cores
 - Services: ~15 containers
 
Full Stack (All Services):
- RAM: 16GB+
 - CPU: 8 cores
 - Services: 40+ containers
 
Reduce n8n workers:
echo "N8N_WORKER_COUNT=1" >> .envOptimize Baserow:
# Already configured in docker-compose:
BASEROW_RUN_MINIMAL=yes
BASEROW_AMOUNT_OF_WORKERS=1Limit LibreTranslate models:
echo "LIBRETRANSLATE_LOAD_ONLY=en,de,fr" >> .envDisable telemetry: Services have telemetry disabled by default for privacy and performance.
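Before enabling more profiles, it helps to check how much RAM is actually free against the tiers above. A Linux-only sketch reading /proc/meminfo:

```shell
# Print available memory and compare it to the 4 GB minimal tier.
AVAIL_MB=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)

if [ "$AVAIL_MB" -lt 4096 ]; then
  echo "${AVAIL_MB} MB available - consider the minimal profile or more swap"
else
  echo "${AVAIL_MB} MB available"
fi
```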
If you have an existing domain-based AI LaunchKit installation:
```bash
# Export n8n workflows
docker exec n8n n8n export:workflow --backup --output=/backup/workflows.json

# Backup databases
docker exec postgres pg_dumpall -U postgres > backup.sql

# Backup volumes
docker run --rm -v localai_n8n_storage:/data -v $(pwd):/backup alpine \
  tar czf /backup/n8n_backup.tar.gz /data
```

Then:
- Deploy this local version on a new server
- Import workflows via the n8n interface
- Restore database data if needed
- Update any hardcoded URLs in workflows
- Test all integrations with the new IP:PORT format
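Updating hardcoded URLs in an exported workflow file can be done with a single substitution. A sketch on demo data; the domain `n8n.example.com` and the file name are placeholders for your actual export:

```shell
# Rewrite hardcoded domain URLs in an exported workflow file to the
# new IP:PORT form. Demo file shown; run against your real export.
WF="workflows.demo.json"
printf '{"url": "https://n8n.example.com/webhook/abc"}\n' > "$WF"

sed -i 's|https://n8n.example.com|http://192.168.1.100:8000|g' "$WF"

grep -o 'http://[^"]*' "$WF"
# http://192.168.1.100:8000/webhook/abc
```

Re-import the rewritten file afterwards and spot-check a few workflows in the n8n UI.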
 
- Original Project: AI LaunchKit
- Local Version: This README
- Service URLs: Generated LOCAL_ACCESS_URLS.txt
- Check Logs: `docker compose logs [service_name]`
- Service Status: `docker ps`
- Resource Usage: `docker stats`
- Port Conflicts: `netstat -tuln | grep 80`
- GitHub: Report local network issues
 - Original: AI LaunchKit issues
 
- Forum: oTTomator Think Tank
 - Discord: Join the AI development community
 
```
ai-launchkit-local/
├── docker-compose.local.yml        # Port-based service definitions
├── .env.local.example              # Local network configuration template
├── scripts/
│   ├── install_local.sh            # Main installation script
│   ├── 01_system_preparation.sh    # System setup & firewall
│   ├── 02_install_docker.sh        # Docker installation
│   ├── 02a_install_nvidia_toolkit.sh # NVIDIA GPU support
│   ├── 03_generate_secrets_local.sh # Password generation
│   ├── 04_wizard_local.sh          # Interactive service selection
│   ├── 04a_setup_perplexica.sh     # Perplexica AI search setup
│   ├── 04b_setup_german_voice.sh   # German TTS voice auto-install
│   ├── 05_run_services_local.sh    # Service startup
│   ├── 06_final_report_local.sh    # Installation summary
│   ├── update_local.sh             # Update all services
│   ├── uninstall_local.sh          # Safe uninstall with backup
│   ├── generate_landing_page.sh    # Generate service dashboard
│   └── utils.sh                    # Shared utility functions
├── docs/                           # Complete documentation
│   ├── OPEN_NOTEBOOK_GUIDE.md      # Open Notebook features & use cases
│   ├── OPEN_NOTEBOOK_SETUP.md      # Open Notebook configuration
│   ├── OPEN_NOTEBOOK_TTS_INTEGRATION.md # Speech services setup
│   ├── QDRANT_SETUP.md             # Vector database API key setup
│   ├── GOTENBERG_GUIDE.md          # Document conversion API
│   ├── LANGFUSE_OLLAMA_INTEGRATION.md # LLM tracking setup
│   └── [service-specific guides]/  # Individual service documentation
├── templates/
│   ├── landing-page.html           # Auto-generated service dashboard
│   └── voice_to_speaker.yaml       # German TTS voice configuration
├── n8n/
│   ├── backup/workflows/           # 300+ pre-built n8n workflows
│   └── n8n_import_script.sh        # Workflow import automation
├── grafana/
│   ├── dashboards/                 # Pre-configured monitoring dashboards
│   └── provisioning/               # Auto-configuration for data sources
├── prometheus/
│   └── prometheus.yml              # Metrics collection configuration
├── shared/                         # Shared files between services
├── media/                          # Media processing workspace
└── [runtime directories]/          # Created during installation
    ├── open-notebook/              # Research assistant data
    ├── openedai-voices/            # TTS voice models
    ├── openedai-config/            # TTS configuration
    ├── perplexica/                 # AI search engine (git cloned)
    └── website/                    # Generated landing page
```
Note: Runtime directories are created automatically during installation and excluded via .gitignore.
Add custom ports:

```yaml
# In docker-compose.local.yml
services:
  my-service:
    image: my-app:latest
    ports:
      - "8999:3000"  # Use ports outside the main range
```

Custom environment:

```bash
# In .env
MY_SERVICE_CONFIG=value
MY_API_KEY=secret
```

n8n workflow calling Ollama:
```
// HTTP Request Node in n8n
Method: POST
URL: http://ollama:11434/api/generate
Body: {
  "model": "qwen2.5:7b-instruct-q4_K_M",
  "prompt": "Hello from n8n workflow!"
}
```

External API calling services:
```javascript
// From an external application
const response = await fetch('http://192.168.1.100:8000/webhook/my-webhook', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ message: 'Hello AI LaunchKit!' })
});
```

Deploy multiple instances:
```bash
# Server 1: Core AI services
COMPOSE_PROFILES="n8n,flowise,cpu,open-webui,monitoring"

# Server 2: RAG & vector databases
COMPOSE_PROFILES="qdrant,weaviate,neo4j,lightrag,ragapp"

# Server 3: Specialized services
COMPOSE_PROFILES="speech,ocr,libretranslate,stirling-pdf,comfyui"
```

This project is licensed under the Apache License 2.0 - see the LICENSE file for details.
Based on AI LaunchKit by Friedemann Schuetz.
Ready to launch your local AI stack?