Automated deployment and configuration for the Vaclab-2 Kubernetes cluster using Ansible and Fleet.
- Homepage (Dev): lab.vac.dev
| Component | Status | Description |
|---|---|---|
| K3s Cluster Provisioning | ✅ OK | K3s cluster deployment via Ansible |
| Helm | ✅ OK | Helm 3 installation on control plane |
| Rancher Fleet | ✅ OK | GitOps deployment engine |
| Traefik | ✅ OK | K3s-friendly ingress controller |
| containerd | ✅ OK | Container Runtime |
| Cert Manager | ✅ OK | Automated TLS via HTTP-01 challenges |
| Kube-OVN | ✅ OK | Primary CNI |
| Cilium | ✅ OK | Chained CNI |
| Hubble | ✅ OK | Cilium real-time flow observability UI |
| Longhorn | ✅ OK | CSI - Distributed block storage |
| Rancher UI | ✅ OK | Rancher management UI |
| GetHomePage | ✅ OK | Home Page UI (App Launcher) for vaclab |
| Authentik | ✅ OK | Identity & Access Management |
| VictoriaMetrics K8s Stack | ✅ OK | VictoriaMetrics Helm charts, including Grafana |
| VictoriaLogs Cluster | ✅ OK | VictoriaLogs Helm chart |
| Kyverno | ✅ OK | Admission & mutation webhook manager |
| Kyverno default resource request enforcement | ✅ OK | Injection of default resource requests when missing (see the policy sketch after the table) |
| Kyverno Policy Reporter | ✅ OK | Policy Observability UI |
| Vaclab Bandwidth-Aware Scheduler | ✅ OK | Custom bandwidth-aware scheduler |
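One example from the table: default resource request injection of this kind is typically implemented as a Kyverno mutate rule with "add if not present" anchors. The following is an illustrative sketch only, not necessarily the policy this repository ships; the policy name and the 100m/128Mi defaults are placeholders:

```yaml
# Illustrative only: a Kyverno ClusterPolicy injecting default requests when missing
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-default-resource-requests   # hypothetical name
spec:
  rules:
    - name: add-default-requests
      match:
        any:
          - resources:
              kinds:
                - Pod
      mutate:
        patchStrategicMerge:
          spec:
            containers:
              - (name): "*"               # apply to every container
                resources:
                  requests:
                    +(cpu): "100m"        # "+()" adds the field only if it is absent
                    +(memory): "128Mi"
```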
This repository provides automated deployment and management for the Vaclab-2 Kubernetes infrastructure. It combines Ansible playbooks for initial cluster provisioning with Fleet-managed GitOps for continuous reconciliation. The automation handles:
- K3s Cluster Deployment
- GitOps with Fleet (everything under the fleet/ directory is deployed on the cluster; see the bundle sketch after this list)
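As a sketch of that GitOps flow, each application under fleet/ is typically described by a fleet.yaml bundle. The example below is hypothetical (path, namespace, chart repository, version, and values are placeholders), not a file taken from this repository:

```yaml
# fleet/example-app/fleet.yaml — hypothetical bundle definition
defaultNamespace: example-app
helm:
  repo: https://charts.example.org        # placeholder chart repository
  chart: example-app                      # placeholder chart name
  version: 1.2.3                          # placeholder version
  values:
    ingress:
      enabled: true
      host: example.lab.vac.dev           # placeholder hostname
```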
Control Machine (where you run Ansible):
- Linux, macOS, or WSL2
- Python 3.8+
- SSH access to target nodes
Target Nodes (cluster machines):
- Ubuntu 24.04
- Sudo privileges for deployment user
On Ubuntu/Debian:
```bash
sudo apt update
sudo apt install -y software-properties-common
sudo add-apt-repository --yes --update ppa:ansible/ansible
sudo apt install -y ansible
```

Install the required Ansible collections for Kubernetes management:

```bash
ansible-galaxy collection install kubernetes.core
ansible-galaxy collection install community.general
```

Install the Python packages needed by the Ansible modules:

```bash
sudo apt install python3-kubernetes python3-yaml
```

Edit ansible/inventory.yaml to define your cluster nodes' IP addresses and the ansible_user (by default the user is set to ubuntu):
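For orientation, here is a minimal inventory sketch assuming the layout used by the upstream k3s-ansible project; the host IPs are placeholders, so check ansible/inventory.yaml for the actual structure:

```yaml
# Illustrative inventory; adapt hosts and groups to this repository's actual file
k3s_cluster:
  children:
    server:
      hosts:
        10.0.0.10:            # control-plane node (placeholder IP)
    agent:
      hosts:
        10.0.0.11:            # worker nodes (placeholder IPs)
        10.0.0.12:
  vars:
    ansible_user: ubuntu      # deployment user with passwordless sudo
```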
Ensure passwordless SSH access to all nodes:
```bash
ssh-copy-id <ansible_user>@<host>
```

Deploy the complete cluster with a single command:

```bash
cd ansible
ansible-playbook -i inventory.yaml playbooks/setup_cluster.yaml --vault-password-file ../.vault_pass.txt
```

This will:
- Install and configure K3s cluster
- Install Helm on the control plane
- Deploy Rancher and Fleet, and create a GitRepo resource that watches the fleet/ directory of this repository (see the example below)
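For reference, the GitRepo created by the playbook looks roughly like the following; the resource name and repository URL here are placeholders rather than the exact values the playbook uses:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: vaclab-2              # placeholder name
  namespace: fleet-local      # Fleet workspace for the local cluster
spec:
  repo: https://github.com/example/vaclab-2-automation   # placeholder URL for this repository
  branch: main                # assumed branch
  paths:
    - fleet                   # everything under fleet/ gets deployed
```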
If you prefer to deploy components individually:
```bash
cd ansible

# K3s cluster only
ansible-playbook -i inventory.yaml k3s-ansible/playbooks/site.yml

# Helm only
ansible-playbook -i inventory.yaml playbooks/setup_cluster.yaml --tags helm

# Fleet only
ansible-playbook -i inventory.yaml playbooks/setup_cluster.yaml --tags fleet
```

Verify that Fleet has picked up the GitRepo and that its pods are running:

```bash
kubectl -n fleet-local get gitrepo
kubectl -n cattle-fleet-system get pods
```

The taint below marks metal-01.he-eu-hel1.misc.vacdst as preferred for monitoring workloads (PreferNoSchedule discourages, but does not forbid, scheduling of other pods):

```bash
kubectl taint nodes metal-01.he-eu-hel1.misc.vacdst \
  dedicated=monitoring:PreferNoSchedule
```
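Workloads intended for that node would then carry a matching toleration, and usually a node selector or affinity as well. A hypothetical pod-spec fragment (in practice this is normally set through the relevant Helm chart values):

```yaml
# Illustrative pod-spec fragment, not a manifest from this repository
tolerations:
  - key: dedicated
    operator: Equal
    value: monitoring
    effect: PreferNoSchedule
nodeSelector:
  kubernetes.io/hostname: metal-01.he-eu-hel1.misc.vacdst
```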
Symptom: After deploying Authentik, accessing protected services (like Longhorn) may return an error:

```json
{
  "Message": "no app for hostname",
  "Host": "longhorn.lab.vac.dev",
  "Detail": "Check the outpost settings..."
}
```

Root Cause: The k8s-outpost blueprint may fail to assign the k8s-forwardauth provider to the embedded outpost due to a timing/ordering issue during the initial deployment.
Verification:
```bash
# Check if the provider is assigned to the outpost
kubectl -n authentik exec authentik-postgresql-0 -- \
  env PGPASSWORD=<POSTGRE_PASSWORD> psql -U authentik -d authentik \
  -c "SELECT o.name, p.name as provider FROM authentik_outposts_outpost o \
      JOIN authentik_outposts_outpost_providers op ON o.uuid = op.outpost_id \
      JOIN authentik_core_provider p ON op.provider_id = p.id;"
```

If the query returns no results, the provider is not assigned.
Fix:
```bash
# Manually apply the outpost blueprint
kubectl exec -n authentik deployment/authentik-worker -- \
  ak apply_blueprint /blueprints/mounted/cm-authentik-blueprints/20-outpost.yaml

# Restart the outpost to pick up the changes
kubectl -n authentik rollout restart deployment/authentik-outpost
```

Verification (should now return authentik Embedded Outpost | k8s-forwardauth):
```bash
kubectl -n authentik exec authentik-postgresql-0 -- \
  env PGPASSWORD=<POSTGRE_PASSWORD> psql -U authentik -d authentik \
  -c "SELECT o.name, p.name as provider FROM authentik_outposts_outpost o \
      JOIN authentik_outposts_outpost_providers op ON o.uuid = op.outpost_id \
      JOIN authentik_core_provider p ON op.provider_id = p.id;"
```