
clr-k8s-examples

How to set up the cluster

Prerequisite

This setup currently works with k8s 1.14 and above. Earlier versions of k8s might work, but are not guaranteed to.

Sample multi-node vagrant setup

To test this tool, you can create a 3-node vagrant setup. This tutorial uses libvirt, but you can use any hypervisor you are familiar with.

  • Install vagrant on the distro you are using. Steps can be found in the Vagrant docs
  • vagrant up --provider=libvirt

You now have a 3-node cluster up and running. Each node has 2 vCPUs, 4 GB of memory, 2x10 GB disks, and 1 additional private network. Customize the setup using environment variables, e.g. NODES=1 MEMORY=8192 CPUS=8 vagrant up --provider=libvirt
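
Once the command completes, you can confirm that all the machines are running with Vagrant's standard status command:

vagrant status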

To log in to the master node and change to this directory:

vagrant ssh clr-01
cd clr-k8s-examples

Setup the nodes in the cluster

Run setup_system.sh once on every node (master and workers) to ensure k8s works on it.

This script ensures the following

  • Installs the bundles Clear Linux needs to support Kubernetes, CRI-O and Kata
  • Customizes the system to ensure correct defaults are set (IP forwarding, swap off, ...)
  • Ensures all the dependencies (kernel modules) are loaded on boot

NOTE: This step is done automatically if using vagrant.
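
On a non-vagrant cluster, run the script on every node over SSH. A minimal sketch, assuming hypothetical hostnames node-1 through node-3 and that this repository is already cloned on each node:

# hostnames below are placeholders for your actual nodes
for node in node-1 node-2 node-3; do
  ssh "$node" "cd clr-k8s-examples && ./setup_system.sh"
done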

Enabling experimental Firecracker support

EXPERIMENTAL: Optionally run setup_kata_firecracker.sh to be able to use the Firecracker VMM with Kata.

The Firecracker setup switches to a sparse-file-backed loop device for devicemapper storage. This should not be used in production.

NOTE: This step is done automatically if using vagrant.
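
On a non-vagrant cluster, run it the same way as setup_system.sh on every node that should offer Firecracker. A sketch using the same hypothetical hostnames as above:

# hostnames below are placeholders for your actual nodes
for node in node-1 node-2 node-3; do
  ssh "$node" "cd clr-k8s-examples && ./setup_kata_firecracker.sh"
done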

Bring up the master

Run create_stack.sh on the master node. This sets up the master and also uses the kubelet config in kubeadm.yaml to propagate cluster-wide kubelet configuration to all workers. Customize that file if you need to set up other cluster-wide properties.

There are different flavors to install; run ./create_stack.sh help for more information.

# default shows help
./create_stack.sh <subcommand>
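
For example, on the master (the subcommand below is only illustrative; pick one of the flavors listed by the help output):

./create_stack.sh help       # list the available flavors
./create_stack.sh default    # illustrative; substitute a flavor from the help output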

Join Workers to the cluster

kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash> --cri-socket=/run/crio/crio.sock

NOTE: Remember to append --cri-socket=/run/crio/crio.sock to the join command generated by the master.

On the workers, just use the join command that the master prints out. There is nothing else you need to run on the worker. All other k8s customizations are pushed in from the master via the values set in the kubeadm.yaml file.

So if you want to customize the kubelet on the master or the workers (things like resource reservations, etc.), update this file when the cluster is created. The master will automatically push this configuration to every worker node that joins.
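
If you no longer have the join command handy, the master can print a fresh one with a new token using standard kubeadm tooling. A sketch (the placeholders match the command above; remember the CRI socket flag):

# on the master: print a new join command with a fresh token
kubeadm token create --print-join-command
# on each worker: run the printed command as root, appending the CRI-O socket
sudo kubeadm join <master-ip>:<master-port> --token <token> --discovery-token-ca-cert-hash <hash> --cri-socket=/run/crio/crio.sock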

Running Kata Workloads

The cluster is set up out of the box to support Kata via runtime class. Clear Linux also sets up Kata automatically on all nodes. Running a workload with its runtime class set to "kata" will launch the Pod/Deployment with Kata.

An example is

kubectl apply -f tests/deploy-svc-ing/test-deploy-kata-qemu.yaml
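
Equivalently, here is a minimal sketch of a Pod that selects the "kata" runtime class directly (the Pod name and image are placeholders):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata            # placeholder name
spec:
  runtimeClassName: kata      # run this Pod under the Kata runtime
  containers:
  - name: nginx
    image: nginx              # placeholder image
EOF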

Running Kata Workloads with Firecracker

EXPERIMENTAL: If the Firecracker setup has been enabled, setting the runtime class to "kata-fc" will launch the Pod/Deployment with Firecracker as the isolation mechanism for Kata.

An example is

kubectl apply -f tests/deploy-svc-ing/test-deploy-kata-fc.yaml
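
The only difference from the QEMU example is the runtime class name. A minimal sketch (placeholder name and image):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kata-fc         # placeholder name
spec:
  runtimeClassName: kata-fc   # run this Pod under Kata with Firecracker
  containers:
  - name: nginx
    image: nginx              # placeholder image
EOF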

Making Kata the default runtime using an admission controller

If you want to run a cluster where Kata is used by default (except for workloads known for sure not to work with Kata) using an admission webhook and the sample admission controller, follow the admit-kata README.md.

Accessing control plane services

Prerequisite

You need the cluster credentials on the computer from which you will be accessing the control plane services. If they are not under $HOME/.kube, set the KUBECONFIG environment variable so kubectl can find them.
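
A sketch of fetching the credentials from the vagrant master and pointing kubectl at them (the destination path is arbitrary; kubeadm writes the admin kubeconfig to /etc/kubernetes/admin.conf on the master, and vagrant boxes typically allow passwordless sudo):

vagrant ssh clr-01 -c "sudo cat /etc/kubernetes/admin.conf" > ~/clr-admin.conf
export KUBECONFIG=~/clr-admin.conf
kubectl get nodes            # quick sanity check that the credentials work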

Dashboard

kubectl proxy # starts serving on 127.0.0.1:8001

The dashboard is available at http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy

Kibana

Start the proxy as above. Kibana is available at http://localhost:8001/api/v1/namespaces/kube-system/services/kibana-logging/proxy/app/kibana

Grafana

kubectl -n monitoring port-forward svc/grafana 3000

Grafana is available at http://localhost:3000. The default credentials are admin/admin. Upon logging in, you will be asked to choose a new password.

Cleaning up the cluster (Hard reset to a clean state)

Run reset_stack.sh on all the nodes
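
For the vagrant setup, a minimal sketch (clr-02 and clr-03 are assumed names following the clr-01 naming above):

for node in clr-01 clr-02 clr-03; do
  vagrant ssh "$node" -c "cd clr-k8s-examples && ./reset_stack.sh"
done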