This repository contains Ansible playbooks and configuration for setting up a Kubernetes cluster using Vagrant with the libvirt provider. It creates a three-node cluster with one control plane node and two worker nodes, all running Debian 12.
For Ubuntu:
sudo apt update
sudo apt install -y qemu-system libvirt-dev virt-manager qemu-efi libvirt-daemon-system ebtables libguestfs-tools ruby-fog-libvirt
sudo adduser $USER libvirt
# HashiCorp's Homebrew packages no longer receive updates due to the BUSL license change; use the apt repository instead
wget -O - https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(grep -oP '(?<=UBUNTU_CODENAME=).*' /etc/os-release || lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list
sudo apt update
sudo apt install vagrant packer
vagrant plugin install vagrant-libvirt
sudo apt install pipx
pipx ensurepath
pipx install --include-deps ansible
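Note that the libvirt group membership only takes effect in a new login session, so log out and back in before continuing. A quick sanity check of the toolchain (standard version and status commands, nothing project-specific):

```bash
# Confirm the tools are on PATH.
vagrant --version
packer --version
ansible --version
# Once the group membership is active, this should list the libvirt
# networks without sudo.
virsh --connect qemu:///system net-list --all
```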
For Fedora:
sudo dnf install -y @virtualization libvirt-devel virt-manager edk2-ovmf
sudo usermod -aG libvirt $USER
sudo dnf install -y dnf-plugins-core
sudo dnf config-manager addrepo --from-repofile=https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf -y install vagrant packer
vagrant plugin install vagrant-libvirt
sudo dnf -y install pipx
pipx ensurepath
pipx install --include-deps ansible
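On Fedora the libvirt daemon may not be running after installation, and the group change again requires a fresh login session. A minimal check before moving on:

```bash
# Start libvirtd now and enable it at boot.
sudo systemctl enable --now libvirtd
# An empty table is fine here; it just proves the daemon answers.
virsh --connect qemu:///system list --all
```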
The cluster consists of:
- 1 control plane node (2GB RAM, 2 CPUs)
- 2 worker nodes (2GB RAM, 2 CPUs each)
- Private network: 192.168.121.0/24
- Pod network: 10.244.0.0/16
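The 192.168.121.0/24 range is the vagrant-libvirt plugin's default management network. If you want to see how it is defined on your host, you can inspect it with virsh (the network name `vagrant-libvirt` below is the plugin's default, created on first use):

```bash
# List all libvirt networks known to the system.
virsh --connect qemu:///system net-list --all
# Show the subnet and DHCP range of the management network.
virsh --connect qemu:///system net-dumpxml vagrant-libvirt
```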
Build the custom base box first:

./build-k8s-base-box.sh
This command will:
- Download the Debian 12 base box
- Install system updates and required packages
- Install containerd container runtime
- Install Kubernetes components (kubeadm, kubelet, kubectl)
- Package the resulting VM into a new Vagrant box named 'k8s-base'
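If the build succeeds, the box is registered locally; you can confirm it with:

```bash
# The k8s-base name matches the box packaged by the build script.
vagrant box list | grep k8s-base
```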
With the base box in place, bring up the cluster:

vagrant up
This command will:
- Create three virtual machines using the k8s-base box
- Initialize the Kubernetes control plane on k8s-control
- Set up pod networking
- Generate join tokens for worker nodes
- Join worker nodes to the cluster
- Install Helm package manager on the control plane
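Provisioning all three nodes takes several minutes. From the host you can confirm the end state with Vagrant's own status command:

```bash
# All three machines should report "running (libvirt)".
vagrant status
```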
The kubeconfig file is automatically configured on the control plane node. To access and verify the cluster:
- SSH into the control plane:

  vagrant ssh k8s-control

- Verify cluster status:

  # Check node status
  kubectl get nodes
  # View running system pods
  kubectl get pods -A
You should see three nodes (one control plane and two workers) in Ready state.
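As an optional smoke test (plain kubectl commands run on k8s-control; the deployment name `smoke-test` is arbitrary), schedule a pod and confirm it lands on a worker:

```bash
# Create a throwaway nginx deployment and wait for it to roll out.
kubectl create deployment smoke-test --image=nginx
kubectl rollout status deployment/smoke-test
# The NODE column should show one of the worker nodes.
kubectl get pods -o wide
# Remove the test deployment.
kubectl delete deployment smoke-test
```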
When you're done experimenting, you can destroy the VMs and remove the custom box and build artifacts:
vagrant destroy -f
vagrant box remove k8s-base
rm -rf output-k8s-base
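To double-check that nothing was left behind on the libvirt side (assuming the default storage pool, named `default`):

```bash
# No cluster domains should remain.
virsh --connect qemu:///system list --all
# No leftover cluster volumes should remain in the storage pool.
virsh --connect qemu:///system vol-list default
```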
Node addresses on the private network:

- Control plane: 192.168.121.10
- Worker 1: 192.168.121.11
- Worker 2: 192.168.121.12
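If you'd rather drive the cluster from the host, here is a sketch for pulling the kubeconfig out of the control plane (it assumes provisioning placed the file at the vagrant user's ~/.kube/config and that the API server listens on the control plane's private address, https://192.168.121.10:6443):

```bash
# Copy the kubeconfig from the control plane to the host.
vagrant ssh k8s-control -c 'cat ~/.kube/config' > kubeconfig-k8s
# Ensure the server entry points at the control plane's private IP.
sed -i 's|https://.*:6443|https://192.168.121.10:6443|' kubeconfig-k8s
export KUBECONFIG=$PWD/kubeconfig-k8s
kubectl get nodes
```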
See the LICENSE file.