Private infrastructure for cloud natives.
- Pulumi (https://www.pulumi.com/) - configuration management, deployments and infrastructure as code
- Tailscale (https://tailscale.com/) - end-to-end encrypted communication between nodes
- K3s (https://k3s.io/) - lightweight Kubernetes cluster
- Longhorn (https://longhorn.io/) - distributed storage
- decentralized - runs on your own physical machines, potentially spread across geographical locations, minimising dependencies on external services and cloud providers
- private by default - uses Tailscale/WireGuard for end-to-end encrypted communication; exposing services publicly must be explicitly configured
- OSS - prefer open source components that can be run locally
- automation - use Pulumi and Helm to automate most tasks and configuration
- easy to use - no deep Kubernetes knowledge required, sensible defaults
- offline mode - continue working (with some limitations) over the local network when the internet connection is lost
- lightweight - can be run on a single laptop using default configuration, focus on consumer hardware
- scalable - distribute workloads across multiple machines as they become available, optional use of cloud instances for autoscaling
- self-healing - in case of problems, the system should recover with no user intervention
- immutable - no snowflakes, as long as there is at least one Longhorn replica available, components can be destroyed and easily recreated
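As a sketch of the "private by default" principle: with the tailscale-operator component installed, a service only becomes reachable on your tailnet after you explicitly create an Ingress for it (service and host names below are placeholders - adjust for your deployment):

```yaml
# Expose an in-cluster service on the tailnet (still private to your
# tailnet and end-to-end encrypted; nothing is reachable from the
# public internet unless you additionally enable Tailscale Funnel).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app # placeholder
spec:
  ingressClassName: tailscale
  defaultBackend:
    service:
      name: my-app # placeholder
      port:
        number: 80
  tls:
    - hosts:
        - my-app # becomes https://my-app.<your-tailnet>.ts.net
```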
- amd-gpu-operator - AMD GPU support
- cert-manager - certificate management
- longhorn - replicated storage
- minio - S3-compatible storage (used as Longhorn backup target)
- nfd - Node Feature Discovery (GPU autodetection)
- nvidia-gpu-operator - NVIDIA GPU support
- tailscale-operator - ingress support with Tailscale authentication
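To illustrate how the minio and longhorn components fit together: Longhorn can use the in-cluster MinIO as its S3 backup target. A hedged sketch of the credential Secret (all names, keys, and endpoints below are placeholders for your own deployment):

```yaml
# Credentials Longhorn uses to reach MinIO - values are placeholders.
apiVersion: v1
kind: Secret
metadata:
  name: minio-backup-secret
  namespace: longhorn-system
type: Opaque
stringData:
  AWS_ACCESS_KEY_ID: <access-key>
  AWS_SECRET_ACCESS_KEY: <secret-key>
  AWS_ENDPOINTS: http://minio.minio.svc.cluster.local:9000
```

With the Secret in place, point Longhorn's `backup-target` setting at an `s3://<bucket>@<region>/` URL and set `backup-target-credential-secret` to the Secret name above.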
- beszel - Beszel lightweight monitoring
- prometheus - Prometheus/Grafana monitoring
- home-assistant - sensor and home automation platform
- automatic1111 - Automatic1111 Stable Diffusion WebUI
- kubeai - Ollama and vLLM models over OpenAI-compatible API
- invokeai - generative AI platform, community edition
- ollama - local large language models
- open-webui - Open WebUI frontend
- sdnext - SD.Next Stable Diffusion WebUI
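Since kubeai serves models over an OpenAI-compatible API, any OpenAI-style client works against it. A minimal stdlib-only sketch - the service URL, API path, and model name are assumptions to adapt to your deployment:

```python
import json
import urllib.request

# Assumed endpoint for kubeai's OpenAI-compatible API inside the
# cluster; replace the host, path, and model with your own values.
KUBEAI_URL = "http://kubeai.kubeai.svc.cluster.local/openai/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """Send a single-turn chat request and return the model's reply."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        KUBEAI_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

The same sketch works against the ollama component as well, since Ollama also exposes an OpenAI-compatible `/v1/chat/completions` endpoint.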
- bitcoin-core - Bitcoin Core full node
- bitcoin-knots - Bitcoin Knots full node
- electrs - Electrs (Electrum) server implementation
Installation instructions assume your machines are running Bluefin (Developer edition, https://projectbluefin.io/), based on Fedora Silverblue, unless otherwise noted. Any modern Linux distribution with kernel 6.11.6+ should work, including Raspberry Pi.
Windows and macOS support is limited; in particular, they cannot be used as storage nodes.
See the Disabling Longhorn guide for instructions on using local-path-provisioner instead of Longhorn.
Both NVIDIA and AMD GPUs are supported. See AMD GPU support for more information.
- Installation - Setup Guide - Initial Pulumi and Tailscale setup
- Installation - SSH Configuration (optional) - Configure SSH keys on nodes for easier access
- Installation - Node Configuration - Configure nodes (firewall, suspend settings)
- Installation - K3s Cluster - Install Kubernetes cluster and label nodes
- components/system/SYSTEM.md - Deploy system components
After the system components have been deployed, you can add any of the optional #Applications. Details are in each module's documentation.
For general application configuration and deployment instructions, see Configuration Guide.
- Ask Devin/DeepWiki - AI-generated documentation and a good place to ask questions
- Configuration Guide - Application configuration and deployment
- Upgrade Guide - Upgrading your OrangeLab installation
- Disabling Longhorn - Running OrangeLab without distributed storage
- AMD GPU support - Using AMD GPUs with OrangeLab
- Electrs Wallet Guide - Connecting Bitcoin wallets to your Electrs server
- Backup and Restore - Using Longhorn backups with S3 storage
- Troubleshooting - Common issues and solutions