Try out your favourite tools and tech stack locally! 🚀
This repository shows you, step by step, how to run a complete environment locally for testing Cluster API Provider OpenStack (CAPO) on top of DevStack.
This drawing shows a brief overview of what we're trying to achieve:
- Deploy DevStack locally; see this repository for how to do this on top of KVM.
- Download the OpenStack RC file via Horizon.
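  If you plan to run the `openstack` CLI commands later in this guide with the RC file, source it first so the client can authenticate. The filename below is only an example; use whatever Horizon gave you:

  ```bash
  # Hypothetical filename; adjust to the RC file you downloaded from Horizon
  source ~/Downloads/openrc.sh
  openstack token issue   # quick check that authentication works
  ```
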
-
Create a
minikube
cluster:
This assumes that you have a KVM network called devstack_net
available.
minikube start --driver=kvm2 --kvm-network=devstack_net
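  If you're unsure whether that network exists, a quick check on the host (assuming `virsh` is installed) could look like this:

  ```bash
  virsh net-list --all | grep devstack_net
  ```
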
- Download `clusterctl`, changing the destination directory if needed, and make it executable:

  ```bash
  curl -L https://github.com/kubernetes-sigs/cluster-api/releases/download/v1.9.5/clusterctl-linux-amd64 -o ~/.local/bin/clusterctl
  chmod +x ~/.local/bin/clusterctl
  ```

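  Assuming `~/.local/bin` is on your `PATH`, you can verify the binary works:

  ```bash
  clusterctl version
  ```
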
- Install CAPO in the management cluster (`minikube`):

  ```bash
  export CLUSTER_TOPOLOGY=true
  kubectl apply -f https://github.com/k-orc/openstack-resource-controller/releases/latest/download/install.yaml
  clusterctl init --infrastructure openstack
  ```

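  To confirm the providers came up, you can check the controller Pods (namespace names assume a default `clusterctl init` installation):

  ```bash
  kubectl get pods -n capi-system
  kubectl get pods -n capo-system
  ```
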
- Build an image using `image-builder`. I used the `qemu` builder; the OpenStack builder is another option (untested at the moment). The `build-qemu-ubuntu-2404` make target was broken when writing this, so I built a 22.04 image like this:

  ```bash
  git clone https://github.com/kubernetes-sigs/image-builder.git
  cd image-builder/images/capi/
  make build-qemu-ubuntu-2204
  ```

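  The resulting qcow2 image ends up under `output/`; the exact directory name depends on the Kubernetes version the target builds:

  ```bash
  ls -lh output/
  ```
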
- Upload the built image to OpenStack if you built it using anything other than the OpenStack builder:
  ```bash
  openstack image create "ubuntu-2204-kube-v1.31.6" \
    --progress \
    --disk-format qcow2 \
    --property os_type=linux \
    --property os_distro=ubuntu2204 \
    --public \
    --file output/ubuntu-2204-kube-v1.31.6/ubuntu-2204-kube-v1.31.6
  ```

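  If you want to confirm the upload, something like this should report the image as `active`:

  ```bash
  openstack image show ubuntu-2204-kube-v1.31.6 -f value -c status
  ```
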
- Create an SSH keypair:

  ```bash
  openstack keypair create --type ssh k8s-devstack01
  ```

  Take note of the private SSH key that is printed and store it somewhere safe.
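  For example, instead of copying the key by hand you could redirect it straight into a file and lock down its permissions (the filename is just a suggestion):

  ```bash
  openstack keypair create --type ssh k8s-devstack01 > ~/.ssh/k8s-devstack01.pem
  chmod 600 ~/.ssh/k8s-devstack01.pem
  ```
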
- Install the needed CAPO prerequisites and generate the cluster manifests. Make sure you've prepared your `clouds.yaml` accordingly:

  ```bash
  wget https://raw.githubusercontent.com/kubernetes-sigs/cluster-api-provider-openstack/master/templates/env.rc -O /tmp/env.rc
  source /tmp/env.rc clouds.yaml openstack
  ```

  Export more environment variables that we'll need to define the workload cluster:

  ```bash
  export KUBERNETES_VERSION=v1.31.6
  export OPENSTACK_DNS_NAMESERVERS=1.1.1.1
  export OPENSTACK_FAILURE_DOMAIN=nova
  export OPENSTACK_CONTROL_PLANE_MACHINE_FLAVOR=m1.medium
  export OPENSTACK_NODE_MACHINE_FLAVOR=m1.medium
  export OPENSTACK_IMAGE_NAME=ubuntu-2204-kube-v1.31.6
  export OPENSTACK_SSH_KEY_NAME=k8s-devstack01
  export OPENSTACK_EXTERNAL_NETWORK_ID=<ID>
  export CLUSTER_NAME=k8s-devstack01
  export CONTROL_PLANE_MACHINE_COUNT=1
  export WORKER_MACHINE_COUNT=0
  ```

  Please note that you'll need to fetch the ID of the `public` network and put it in the `OPENSTACK_EXTERNAL_NETWORK_ID` environment variable. Also, the flavor needs to have at least 2 cores, otherwise `kubeadm` will fail; that check can be ignored from a `kubeadm` perspective, but that's not covered here.
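  A command along these lines should return the ID of the external network (DevStack typically names it `public`):

  ```bash
  export OPENSTACK_EXTERNAL_NETWORK_ID=$(openstack network show public -f value -c id)
  ```
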
- Generate the cluster manifests and apply them in the `minikube` cluster:

  ```bash
  clusterctl generate cluster k8s-devstack01 --infrastructure openstack > k8s-devstack01.yaml
  kubectl apply -f k8s-devstack01.yaml
  ```

- Check the status of the cluster using `clusterctl`, and also check the logs of, primarily, the `capo-controller`:

  ```
  clusterctl describe cluster k8s-devstack01
  NAME                                                                READY  SEVERITY  REASON  SINCE  MESSAGE
  Cluster/k8s-devstack01                                              True                     14m
  ├─ClusterInfrastructure - OpenStackCluster/k8s-devstack01
  └─ControlPlane - KubeadmControlPlane/k8s-devstack01-control-plane   True                     14m
    └─Machine/k8s-devstack01-control-plane-zkjdn                      True                     15m
  ```

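  To follow the CAPO controller logs mentioned above, something like this should work (the deployment name assumes a default CAPO installation):

  ```bash
  kubectl logs -n capo-system deployment/capo-controller-manager -f
  ```
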
- Download the cluster kubeconfig and test connectivity:

  ```bash
  clusterctl get kubeconfig k8s-devstack01 > k8s-devstack01.kubeconfig
  export KUBECONFIG=k8s-devstack01.kubeconfig
  ```

  You should now be able to reach the cluster running within the DevStack environment! 🎉
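  For example, listing the nodes should work now; expect the control plane node to report `NotReady` until a CNI is installed in the next step:

  ```bash
  kubectl get nodes
  ```
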
- Install a CNI (Cilium), manually for now:

  ```bash
  helm repo add cilium https://helm.cilium.io/
  helm upgrade --install cilium cilium/cilium --version 1.17.1 \
    --namespace kube-system \
    --set hubble.enabled=false \
    --set envoy.enabled=false \
    --set operator.replicas=1
  ```

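  Once the chart is installed, the Cilium agent Pods should come up (the label below is the one the chart sets by default):

  ```bash
  kubectl -n kube-system get pods -l k8s-app=cilium
  ```
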
- Install the OpenStack Cloud Provider:

  ```bash
  git clone --depth=1 https://github.com/kubernetes-sigs/cluster-api-provider-openstack.git
  ```

  Generate the external cloud provider configuration with the provided helper script:

  ```bash
  cluster-api-provider-openstack/templates/create_cloud_conf.sh ~/Downloads/clouds.yaml openstack > /tmp/cloud.conf
  ```

  Note that if you want support for creating a `Service` of `type: LoadBalancer` you'll need to configure this in the `cloud.conf` and re-create the secret.

  Create the needed secret:

  ```bash
  kubectl create secret -n kube-system generic cloud-config --from-file=/tmp/cloud.conf
  ```

  Create the needed Kubernetes resources for the OpenStack cloud provider:

  ```bash
  helm repo add cpo https://kubernetes.github.io/cloud-provider-openstack
  helm repo update
  helm upgrade --install \
    openstack-ccm cpo/openstack-cloud-controller-manager \
    --namespace kube-system \
    --values occm-values.yaml
  ```

  If everything went as expected, the pending Pods should now have been scheduled and all Pods should have IP addresses assigned to them.
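  A quick sanity check:

  ```bash
  kubectl get pods -A -o wide
  kubectl get nodes
  ```
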
- Done! 🚀