Why?
- To test out different container runtimes.
- To evaluate tools such as IPVS, AppArmor, Falco, etc.
- To use clear, understandable and extensible Ansible playbooks.
- Vagrant installed on your local host. At the moment the Vagrant script requires VirtualBox to be installed; however, this can easily be changed in the script, Vagrantfile.
- Ansible version >= 2.10 installed on your local host.
- kubectl installed on your local host. This is optional.
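To double-check that the tooling is in place, each tool reports its version from the command line:

vagrant --version
ansible --version
kubectl version --client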
Note that Ansible is not supported on Windows, so the 'best' solution there is to run Vagrant and Ansible inside a guest virtual machine.
To create and provision the cluster, run:

vagrant up --provision --provider virtualbox
This may take a few minutes. Upon successful completion, a Kubernetes cluster will be running and accessible via the assigned private IP on port 6443.
In addition, the provisioner will create a kubeconfig in the cluster directory, cluster/. You can use it to authenticate and execute commands against the cluster.
For example, to check node status:
kubectl --kubeconfig ./cluster/kubeconfig get nodes
or
export KUBECONFIG=$(pwd)/cluster/kubeconfig
kubectl get nodes
If kubectl is not installed on your local host, you can SSH into the control node and run commands there:
vagrant ssh control01
kubectl get nodes
After successful provisioning of the cluster, you can manage the nodes as follows:
- stopping the nodes
vagrant halt
- restarting the nodes
vagrant up
- destroying the nodes
vagrant destroy
- re-provisioning a node (the node must be running)
vagrant provision [node name/virtual machine name]
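For example, to re-run the playbooks against the control node used in the earlier examples:

vagrant provision control01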
For additional details on these commands and others, consult the Vagrant documentation.
To facilitate dashboard access, the provisioner creates a dashboard HTML stub file in the cluster directory, cluster/, together with a corresponding login token. From a file browser, double-click the stub file to open the dashboard.
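Alternatively, if kubectl is installed locally, the dashboard is typically also reachable through kubectl proxy. Note that the URL below assumes the upstream default kubernetes-dashboard namespace and service name, which this setup may override:

kubectl --kubeconfig ./cluster/kubeconfig proxy

Then browse to http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and log in with the token from the cluster/ directory.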
The gVisor runtime class name is gvisor.
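To run a pod under gVisor, set runtimeClassName in the pod spec. A minimal sketch (the pod name and image are arbitrary choices for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-gvisor        # illustrative name
spec:
  runtimeClassName: gvisor  # selects the gVisor runtime class
  containers:
  - name: nginx
    image: nginx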
IPVS is installed by default but not enabled.
To enable and use IPVS:
- Edit the kube-proxy config map and set its mode to ipvs:

kubectl -n kube-system edit cm kube-proxy

- Re-create all the kube-proxy pods:

kubectl -n kube-system delete po -l k8s-app=kube-proxy

- Edit
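After the kube-proxy pods come back up (and once the remaining steps are complete), the active proxier can be confirmed from the kube-proxy logs; the exact wording of the log line varies across Kubernetes versions:

kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs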