Singleton Service

In this example we see how a PodDisruptionBudget works. As with the other examples, we assume a Kubernetes installation is available. Check the INSTALL documentation for how you can use an online Kubernetes playground. We can’t use Minikube in this example because we need a 'real' cluster with multiple nodes, from which we can drain a node to see the effect of a PodDisruptionBudget.

First, let’s create a Deployment with six Pods:

kubectl create -f https://k8spatterns.io/SingletonService/deployment.yml
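The manifest itself is not reproduced here. A minimal sketch of what it plausibly contains, given the Deployment name and replica count used later in this example (the app label and the image name are assumptions):

# Sketch of deployment.yml — only the name and replica count are taken
# from this example; the label and image are assumptions
apiVersion: apps/v1
kind: Deployment
metadata:
  name: random-generator
spec:
  replicas: 6
  selector:
    matchLabels:
      app: random-generator
  template:
    metadata:
      labels:
        app: random-generator
    spec:
      containers:
      - name: random-generator
        image: k8spatterns/random-generator:1.0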

We can check on which nodes the Pods are running with

kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

To ensure that at least four Pods are always running, we create a PodDisruptionBudget with

kubectl create -f https://k8spatterns.io/SingletonService/pdb.yml
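Again, the manifest is not reproduced here. A minimal sketch of what a PodDisruptionBudget guaranteeing four running Pods looks like, assuming the selector matches the app label from the Deployment sketch above:

# Sketch of pdb.yml — the selector labels are an assumption and must
# match the Deployment's Pod labels (policy/v1beta1 on older clusters)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: random-generator-pdb
spec:
  minAvailable: 4
  selector:
    matchLabels:
      app: random-generator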

Now let’s drain a node and see how the Pods are relocated. Let’s say the second node is called node01 (as is the case in a Katacoda setup):

kubectl drain --ignore-daemonsets node01 >/dev/null 2>&1 &

We drain the node in the background so that we can watch how the Pods are relocated. (The --ignore-daemonsets flag is needed because Pods managed by a DaemonSet can’t be evicted and would otherwise block the drain.)

watch kubectl get pods -o=custom-columns=NAME:.metadata.name,NODE:.spec.nodeName

As you can see, there are always at least four Pods running across both nodes, until eventually all six Pods are running on the master node.

You can undo the drain operation with

kubectl patch node node01 -p '{"spec":{"unschedulable":false}}'
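Alternatively, the dedicated uncordon command does the same:

kubectl uncordon node01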

and restore the original Pod distribution by scaling the Deployment down and up again:

kubectl scale deployment random-generator --replicas 0
kubectl scale deployment random-generator --replicas 6