OrangeLab

Private infrastructure for cloud natives.

Core components

Principles and goals

  • decentralized - runs on your own physical machines, potentially spread across geographical locations, minimizing dependency on external services and cloud providers
  • private by default - uses Tailscale/WireGuard for end-to-end encrypted communication; exposing a service publicly has to be explicitly configured
  • OSS - prefers open-source components that can be run locally
  • automation - uses Pulumi and Helm to automate most tasks and configuration
  • easy to use - no deep Kubernetes knowledge required, sensible defaults
  • offline mode - keeps working (with some limitations) over the local network when the internet connection is lost
  • lightweight - can run on a single laptop with the default configuration; focuses on consumer hardware
  • scalable - distributes workloads across multiple machines as they become available, with optional cloud instances for autoscaling
  • self-healing - in case of problems, the system should recover with no user intervention
  • immutable - no snowflakes; as long as at least one Longhorn replica is available, components can be destroyed and easily recreated

Applications

System module:

  • amd-gpu-operator - AMD GPU support
  • cert-manager - certificate management
  • longhorn - replicated storage
  • minio - S3-compatible storage (used as Longhorn backup target)
  • nfd - Node Feature Discovery (GPU autodetection)
  • nvidia-gpu-operator - NVIDIA GPU support
  • tailscale-operator - ingress support with Tailscale authentication

Monitoring module:

  • beszel - Beszel lightweight monitoring
  • prometheus - Prometheus/Grafana monitoring

IoT module:

  • home-assistant - sensor and home automation platform

AI module:

  • automatic1111 - Automatic1111 Stable Diffusion WebUI
  • kubeai - Ollama and vLLM models over OpenAI-compatible API
  • invokeai - generative AI platform, community edition
  • ollama - local large language models
  • open-webui - Open WebUI frontend
  • sdnext - SD.Next Stable Diffusion WebUI

Bitcoin module:

  • bitcoin-core - Bitcoin Core full node
  • bitcoin-knots - Bitcoin Knots full node
  • electrs - Electrs (Electrum) server implementation

Platforms and limitations

Installation instructions assume your machines are running Bluefin (Developer edition, https://projectbluefin.io/), based on Fedora Silverblue, unless otherwise noted. OrangeLab should run on any modern Linux distribution with Linux kernel 6.11.6+, including Raspberry Pi.
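
The kernel requirement can be verified with a quick shell check before installing; a minimal sketch (the `sort -V` version comparison assumes GNU coreutils, which Bluefin and most desktop distributions ship):

```shell
#!/bin/sh
# Check whether the running kernel meets the 6.11.6+ requirement.
required="6.11.6"
# Strip any distro suffix, e.g. "6.11.6-300.fc41.x86_64" -> "6.11.6"
current="$(uname -r | cut -d- -f1)"
# sort -V orders version strings; if the required version sorts first,
# the running kernel is new enough.
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "kernel $current: OK"
else
    echo "kernel $current: too old (need >= $required)"
fi
```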

Windows and macOS support is limited; in particular, such machines cannot be used as storage nodes.

See the Disabling Longhorn guide for instructions on using local-path-provisioner instead of Longhorn.

Both NVIDIA and AMD GPUs are supported. See AMD GPU support for more information.

Installation

After the system components have been deployed, you can add any of the optional applications listed above. See each module's documentation for details.
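
Since deployment is driven by Pulumi, enabling an optional application is typically a matter of flipping a flag in the stack configuration and redeploying. A hypothetical sketch of a stack config fragment (the exact key names depend on each module; `ollama:enabled` is an assumption, not confirmed from this repository):

```yaml
# Pulumi.<stack>.yaml (fragment) -- key name is an illustrative assumption
config:
  ollama:enabled: "true"
```

After editing the stack configuration (or setting the same value with `pulumi config set`), running `pulumi up` applies the change and deploys the application.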

For general application configuration and deployment instructions, see Configuration Guide.

Documentation