A GitOps-driven Kubernetes cluster using K3s, ArgoCD, and Cilium, with integrated Cloudflare Tunnel for secure external access.
- View Documentation Online - Full documentation website
- Local Documentation - Browse documentation in the repository
This repository demonstrates a single-node K3s cluster setup, optimized for home lab and small production environments. While K3s supports multi-node clusters, this setup uses a single node to simplify storage management and reduce complexity.
- Fixed storage location for applications (no need for distributed storage)
- Simplified backup and restore procedures
- Perfect for home lab and small production workloads
- Can be expanded with worker nodes for compute-only scaling
🧠 Compute
├── AMD Threadripper 2950X (16c/32t)
├── 128GB ECC DDR4 RAM
├── 2× NVIDIA RTX 3090 24GB
└── Google Coral TPU

💾 Storage
├── 4TB ZFS RAID-Z2
├── NVMe OS Drive
└── Local Path Storage for K8s

🌐 Network
├── 2.5Gb Networking
├── Firewalla Gold
└── Internal DNS Resolution
- 💻 A Linux server/VM (can be Proxmox VM, mini PC, NUC, or similar)
- Minimum 4GB RAM (8GB+ recommended)
- 2 CPU cores (4+ recommended)
- 20GB storage (100GB+ recommended for applications)
- Note: these are minimum requirements; see the hardware stack above for the current setup
- 🌐 Domain configured in Cloudflare
- 🔐 1Password account for secrets management
- 1Password Connect credentials and token (setup guide)
- Cloudflare API tokens and tunnel configuration (setup guide)
- 🛠️ `kubectl` installed locally
- ☁️ `cloudflared` installed locally
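The hardware minimums listed above can be sanity-checked before installing; a quick sketch (Linux-only, reads /proc/meminfo, nproc, and GNU df; thresholds match the list above):

```shell
# Preflight check against the stated minimums (4GB RAM, 2 cores, 20GB disk)
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
cpus=$(nproc)
[ "$mem_kb" -ge $((4 * 1024 * 1024)) ] && echo "RAM: ok" || echo "RAM: below 4GB minimum"
[ "$cpus" -ge 2 ] && echo "CPU: ok" || echo "CPU: fewer than 2 cores"
# Free space on / in GB (compare against the 20GB minimum yourself)
df -BG --output=avail / | tail -1
```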
While this setup uses a single node, you can add worker nodes for additional compute capacity:
# On worker node
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
# Worker nodes can be added without affecting storage, as they:
# - Don't run storage workloads
# - Only handle compute tasks
# - Automatically join the cluster
Note: Storage remains on the main node to maintain data locality and simplify management.
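Before piping the installer on a worker, it can help to sanity-check the join parameters so a typo fails fast; a sketch (the URL and token are examples — the real token lives in /var/lib/rancher/k3s/server/node-token on the main node):

```shell
# Validate join parameters before running the k3s installer (example values)
K3S_URL="https://192.168.10.11:6443"
K3S_TOKEN="randomtokensecret1234"

case "$K3S_URL" in
  https://*:6443) echo "K3S_URL: ok" ;;
  *) echo "K3S_URL: unexpected format" >&2; exit 1 ;;
esac
[ "${#K3S_TOKEN}" -ge 16 ] && echo "K3S_TOKEN: ok" || { echo "K3S_TOKEN: too short" >&2; exit 1; }
```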
# Install required system packages
sudo apt update
sudo apt install -y zfsutils-linux nfs-kernel-server cifs-utils open-iscsi
sudo apt install -y --reinstall zfs-dkms
# Install 1Password CLI (follow instructions at https://1password.com/downloads/command-line/)
export SETUP_NODEIP=192.168.10.11
export SETUP_CLUSTERTOKEN=randomtokensecret1234
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.32.0+k3s1" \
INSTALL_K3S_EXEC="--node-ip $SETUP_NODEIP \
--disable=flannel,local-storage,metrics-server,servicelb,traefik \
--flannel-backend='none' \
--disable-network-policy \
--disable-cloud-controller \
--disable-kube-proxy" \
K3S_TOKEN=$SETUP_CLUSTERTOKEN \
K3S_KUBECONFIG_MODE=644 sh -s -
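The `SETUP_CLUSTERTOKEN` above is only a sample value; one way to generate a random token instead (assumes `openssl` is installed):

```shell
# Replace the sample cluster token with a randomly generated one
export SETUP_CLUSTERTOKEN=$(openssl rand -hex 16)   # 32 hex characters
echo "token length: ${#SETUP_CLUSTERTOKEN}"
```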
# Setup kubeconfig
mkdir -p $HOME/.kube
sudo cp -i /etc/rancher/k3s/k3s.yaml $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
chmod 600 $HOME/.kube/config
# Install Cilium CLI
CILIUM_CLI_VERSION=$(curl -s https://raw.githubusercontent.com/cilium/cilium-cli/main/stable.txt)
CLI_ARCH=amd64
curl -L --fail --remote-name-all https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz
sudo tar xzvfC cilium-linux-${CLI_ARCH}.tar.gz /usr/local/bin
rm cilium-linux-${CLI_ARCH}.tar.gz
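Each cilium-cli release also publishes a `.sha256sum` file next to the tarball; verifying it before the `tar` step above guards against corrupted downloads (same pattern as the official Cilium install docs):

```shell
# Download and verify the release checksum (run before extracting the tarball)
curl -L --fail --remote-name-all \
  https://github.com/cilium/cilium-cli/releases/download/${CILIUM_CLI_VERSION}/cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
sha256sum --check cilium-linux-${CLI_ARCH}.tar.gz.sha256sum
```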
# Install Cilium (set API_SERVER_IP/API_SERVER_PORT to the node IP used above and 6443)
cilium install \
--version 1.16.3 \
--set k8sServiceHost=${API_SERVER_IP} \
--set k8sServicePort=${API_SERVER_PORT} \
--set kubeProxyReplacement=true \
  --set operator.replicas=1
# Verify installation
cilium status
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/experimental-install.yaml
cd infra/network/cilium
cilium upgrade -f values.yaml
CoreDNS can be installed in two ways:
# Option A: leave coredns out of the --disable list so K3s installs its bundled CoreDNS
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION="v1.32.0+k3s1" \
INSTALL_K3S_EXEC="--node-ip $SETUP_NODEIP \
--disable=flannel,local-storage,metrics-server,servicelb,traefik \
--flannel-backend='none' \
--disable-network-policy \
--disable-cloud-controller \
--disable-kube-proxy" \
K3S_TOKEN=$SETUP_CLUSTERTOKEN \
K3S_KUBECONFIG_MODE=644 sh -s -
Use this option if you need to customize CoreDNS configuration:
# First, ensure CoreDNS is disabled in K3s
--disable=flannel,local-storage,metrics-server,servicelb,traefik,coredns
# Then install custom CoreDNS:
k3s kubectl kustomize --enable-helm infra/network/coredns | k3s kubectl apply -f -
# Verify installation
kubectl get pods -n kube-system -l k8s-app=coredns
Key differences:
- Option A: uses the K3s default CoreDNS configuration
- Option B: allows full customization of CoreDNS settings:
  - Custom DNS forwarding rules
  - Split DNS configuration
  - Advanced plugin configuration
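The kind of customization Option B enables can look like this minimal Corefile sketch (illustrative only — the hosts entry is a made-up example, and the real configuration lives in infra/network/coredns):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    hosts {
        192.168.10.11 k3s-node   # hypothetical node entry
        fallthrough
    }
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```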
# Create required namespaces
kubectl create namespace 1passwordconnect
kubectl create namespace external-secrets
# Generate 1Password Connect credentials (creates 1password-credentials.json)
op connect server create
# Base64-encode the credentials file for the secret below (assumes GNU base64)
base64 -w 0 1password-credentials.json > credentials.base64
export CONNECT_TOKEN="your-1password-connect-token"
# Create required secrets
kubectl create secret generic 1password-credentials \
--from-file=1password-credentials.json=credentials.base64 \
--namespace 1passwordconnect
kubectl create secret generic 1password-operator-token \
--from-literal=token=$CONNECT_TOKEN \
--namespace 1passwordconnect
kubectl create secret generic 1passwordconnect \
--from-literal=token=$CONNECT_TOKEN \
--namespace external-secrets
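With the token secrets in place, the External Secrets Operator needs a store pointing at the Connect API. A hedged sketch of a ClusterSecretStore (the service URL and vault name are assumptions; the repo's actual manifests may differ):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: 1password
spec:
  provider:
    onepassword:
      # Assumed in-cluster Connect service URL
      connectHost: http://onepassword-connect.1passwordconnect.svc.cluster.local:8080
      vaults:
        homelab: 1   # hypothetical vault name; lower number = higher precedence
      auth:
        secretRef:
          connectTokenSecretRef:
            name: 1passwordconnect
            namespace: external-secrets
            key: token
```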
# Install Gateway API CRDs
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/latest/download/experimental-install.yaml
# Install ArgoCD with our custom configuration
k3s kubectl kustomize --enable-helm infra/controllers/argocd | k3s kubectl apply -f -
# Wait for ArgoCD to be ready
kubectl wait --for=condition=available deployment -l app.kubernetes.io/name=argocd-server -n argocd --timeout=300s
# Wait for CRDs to be established
kubectl wait --for=condition=established crd/applications.argoproj.io --timeout=60s
kubectl wait --for=condition=established crd/appprojects.argoproj.io --timeout=60s
# Install Argo apps (WIP)
kubectl apply -f root-apps/project.yaml
kubectl apply -f root-apps/infrastructure.yaml
# Confirm the ApplicationSet exists and wait for the infrastructure app to sync
kubectl get applicationset -n argocd infrastructure -o yaml
kubectl wait --for=condition=synced application/infrastructure -n argocd --timeout=300s
# Only after infrastructure has synced:
kubectl apply -f root-apps/applications.yaml
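The root apps applied above are thin ArgoCD resources that generate everything else. A hedged sketch of what root-apps/infrastructure.yaml might look like as an ApplicationSet with a Git directory generator (the repoURL and paths are placeholders, not this repo's actual values):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: infrastructure
  namespace: argocd
spec:
  generators:
    - git:
        repoURL: https://github.com/example/home-lab.git   # placeholder
        revision: main
        directories:
          - path: infra/*/*        # one Application per infra component
  template:
    metadata:
      name: '{{path.basename}}'
    spec:
      project: infrastructure
      source:
        repoURL: https://github.com/example/home-lab.git   # placeholder
        targetRevision: main
        path: '{{path}}'
      destination:
        server: https://kubernetes.default.svc
        namespace: '{{path.basename}}'
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
```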
This installation method includes:
- Custom plugin configurations (Kustomize with Helm support)
- Resource limits and requests
- Security settings
- CMP (Config Management Plugin) setup
For detailed ArgoCD configuration, see ArgoCD Documentation
# Check core components
kubectl get pods -A
cilium status
# Check ArgoCD
kubectl get application -A
kubectl get pods -n argocd
# Check secrets
kubectl get pods -n 1passwordconnect
kubectl get externalsecret -A
For detailed configuration and advanced setup:
The cluster uses a split network configuration with the following topology:
graph TD
subgraph "External Network"
A[Internet] --> B[Cloudflare]
B --> C[Cloudflare Tunnel]
end
subgraph "Network Hardware"
D[Firewalla Gold] --> E[2.5Gb Switch]
end
subgraph "Internal Network 192.168.10.0/24"
E --> F[K3s Node<br/>192.168.10.11]
E --> G[Gateway External<br/>192.168.10.50]
E --> H[Gateway Internal<br/>192.168.10.51]
E --> I[CoreDNS<br/>192.168.10.53]
end
subgraph "K8s Networks"
J[Pod Network<br/>10.42.0.0/16]
K[Service Network<br/>10.43.0.0/16]
end
C --> G
F --> J
F --> K
I --> H
- Internal access via Gateway API (192.168.10.51)
- External access via Cloudflare Tunnel
- DNS split horizon for internal/external resolution
Detailed Network Documentation
Local path provisioner and SMB storage options:
- Node-specific PV binding
- Storage classes for different use cases
- Volume lifecycle management
Detailed Storage Documentation
Hardware accelerated workloads using:
- NVIDIA GPU Operator
- 2Γ RTX 3090 for AI/ML tasks
- Google Coral TPU for inference
- Optimized for Ollama and ComfyUI
Secure access through:
- Cloudflare Zero Trust
- Split DNS configuration
- Internal certificate management
Detailed Security Documentation
Secure secret handling using:
- 1Password integration
- External Secrets Operator
- Automated secret rotation
- RBAC-based access control
Detailed Secrets Documentation
GitOps workflow using:
- Pure Kubernetes manifests with Kustomize
- Selective Helm chart usage
- Multi-environment management
.
├── apps/                    # Application manifests
│   ├── core/                # Core system applications
│   ├── monitoring/          # Monitoring stack
│   └── services/            # User applications
├── docs/                    # Documentation
│   ├── argocd.md            # ArgoCD setup and workflow
│   ├── network.md           # Network configuration
│   ├── security.md          # Security setup
│   ├── storage.md           # Storage configuration
│   └── external-services.md # External services setup
├── infra/                   # Infrastructure components
│   ├── root-apps/           # ArgoCD root applications
│   └── base/                # Base infrastructure
└── sets/                    # ApplicationSet configurations
Common issues and solutions:
- Network Issues 🌐
  - Check Gateway API status
  - Verify Cloudflare Tunnel connectivity
  - Test DNS resolution
- Storage Issues 💾
  - Verify PV binding
  - Check storage provisioner logs
  - Validate node affinity
- ArgoCD Issues ⚠️
  - Check application sync status
  - Verify Git repository access
  - Review application logs
- Fork the repository
- Create a feature branch
- Submit a pull request
MIT License - See LICENSE for details
When running K3s with CoreDNS disabled (`--disable coredns`), the manual CoreDNS setup requires specific configuration to work properly:
- Service IP: must use K3s's default DNS IP, `10.43.0.10`
- Service name: must be `kube-dns` for K3s compatibility
- Namespace: deployed in the `kube-system` namespace
- DNS configuration:
  - `kubernetes` plugin: configured for the `cluster.local` domain
  - `hosts` plugin: for node resolution
  - `forward` plugin: uses the host's `/etc/resolv.conf`
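Those requirements translate into a Service roughly like this sketch (the name, namespace, and clusterIP come from the list above; the selector and port layout are assumptions about the CoreDNS deployment):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns            # K3s expects this exact name
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  clusterIP: 10.43.0.10     # K3s's default DNS IP
  selector:
    k8s-app: coredns        # assumption: matches the CoreDNS pod labels
  ports:
    - name: dns
      port: 53
      protocol: UDP
    - name: dns-tcp
      port: 53
      protocol: TCP
```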
1. Disable CoreDNS in the K3s installation:
   curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable coredns" sh -
2. Apply the CoreDNS configuration:
   k3s kubectl kustomize --enable-helm infra/network/coredns | k3s kubectl apply -f -
3. Verify DNS resolution:
   kubectl get pods -n kube-system -l k8s-app=kube-dns
   kubectl get svc -n kube-system kube-dns