
Provision a local Kubernetes cluster with Ansible and Vagrant

Why?

  • To test out different container runtimes.

  • To evaluate tools such as IPVS, AppArmor, Falco, etc.

  • To use clear, understandable and extensible Ansible playbooks.

Requirements

Linux

  • Vagrant installed on your local host.

    At the moment the Vagrant script requires that VirtualBox be installed. However, this can easily be changed in the script, Vagrantfile.

  • Ansible version >= 2.10 installed on your local host.

  • kubectl installed on your local host (optional).

Windows

Ansible does not run natively on Windows; the most practical workaround is to run Vagrant and Ansible inside a guest virtual machine.

Getting started

  vagrant up --provision --provider virtualbox

This may take a few minutes. Upon successful completion, a Kubernetes cluster will be running and accessible via the assigned private IP on port 6443.
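
Once the cluster is up, a quick way to confirm the API server is reachable is to query its version endpoint. Replace <private-ip> with the address assigned to the control node; this assumes the default kubeadm RBAC, which exposes /version to anonymous clients, and -k is needed because the cluster's certificate is not trusted by your host:

  curl -k https://<private-ip>:6443/version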

In addition, the provisioner will create a kubeconfig in the cluster directory, cluster/. You can use it to authenticate and execute commands against the cluster.

For example, to check node status:

  kubectl --kubeconfig ./cluster/kubeconfig get nodes

or

  export KUBECONFIG=$(pwd)/cluster/kubeconfig
  kubectl get nodes

If kubectl is not installed on your local host, you can SSH into the control node and run commands there:

  vagrant ssh control01
  kubectl get nodes

After the cluster has been provisioned successfully, you can manage the nodes as follows:

  • stopping the nodes
  vagrant halt
  • restarting the nodes
  vagrant up
  • destroying the nodes
  vagrant destroy
  • re-provisioning a running node
  vagrant provision [node name/virtual machine name]
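
For example, to re-provision only the control node:

  vagrant provision control01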

For additional details on these commands and others, consult the Vagrant documentation.

Installed features

Kubernetes Dashboard

  • Kubernetes dashboard

    To facilitate dashboard access, the provisioner creates a dashboard HTML stub file in the cluster directory, cluster/, together with a corresponding login token. From a file browser, double-click the stub file to open the dashboard.
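
    If you prefer the command line over the stub file, the dashboard can usually also be reached through kubectl proxy. The URL below assumes the dashboard is deployed under the standard kubernetes-dashboard namespace and service name, which may differ in this setup:

      kubectl --kubeconfig ./cluster/kubeconfig proxy
      # then browse to:
      # http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/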

Metrics

Container runtimes

  • gVisor

    Its RuntimeClass name is gvisor.
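
    To run a pod under gVisor, set runtimeClassName in the pod spec. A minimal sketch; the pod name and image are arbitrary examples:

      # create a pod that runs sandboxed under the gvisor RuntimeClass
      kubectl --kubeconfig ./cluster/kubeconfig apply -f - <<EOF
      apiVersion: v1
      kind: Pod
      metadata:
        name: gvisor-demo
      spec:
        runtimeClassName: gvisor
        containers:
        - name: nginx
          image: nginx
      EOF

    If dmesg is available in the image, kubectl exec gvisor-demo -- dmesg should show gVisor's startup messages, confirming the pod is sandboxed.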

Policy

IPVS

  • IPVS is installed by default but not enabled.

    To enable and use IPVS:

    1. Edit the kube-proxy ConfigMap and set its mode to ipvs:
      kubectl -n kube-system edit cm kube-proxy

    2. Re-create all the kube-proxy pods so they pick up the new mode:
      kubectl -n kube-system delete po -l k8s-app=kube-proxy
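
    To confirm that kube-proxy actually switched over, check its logs; the exact startup message varies by Kubernetes version, so treat the grep pattern as an approximation:

      kubectl --kubeconfig ./cluster/kubeconfig -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs

    Alternatively, if the ipvsadm tool is installed on a node, you can vagrant ssh into it and run sudo ipvsadm -Ln to list the virtual servers kube-proxy has programmed.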
    

Security

Alternatives
