Several charts' default values prevent an operator from draining a worker node #7127
Description
Version of Helm and Kubernetes:
# kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-06-18T14:12:08Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.8", GitCommit:"c138b85178156011dc934c2c9f4837476876fb07", GitTreeState:"clean", BuildDate:"2018-06-18T14:12:08Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"linux/amd64"}
caasp-admin:~ # helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"2.8.2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Which chart:
stable/nginx-ingress and stable/memcached (these are the ones I noticed!)
What happened:
The default configuration creates a single replica together with a PodDisruptionBudget whose minAvailable is 1. This leaves Kubernetes unable to drain the node, because evicting the pod would violate the PodDisruptionBudget.
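For illustration, the kind of budget these charts render by default looks roughly like the following (a sketch with placeholder names, not the charts' actual manifests). With only one matching pod, it permits zero voluntary disruptions, so the drain eviction is refused:

```yaml
# Illustrative sketch only -- resource names and labels are placeholders,
# not the actual rendered output of stable/nginx-ingress or stable/memcached.
apiVersion: policy/v1beta1          # PDB API version available on Kubernetes 1.9
kind: PodDisruptionBudget
metadata:
  name: example-memcached
spec:
  minAvailable: 1                   # equal to the default replica count of 1,
  selector:                         # so no pod may ever be evicted voluntarily
    matchLabels:
      app: example-memcached
```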
What you expected to happen:
The Helm charts should not install a PodDisruptionBudget when the number of replicas is 1.
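One way to express this (a sketch only; the `.Values.replicaCount` value and the `example.fullname` helper below are assumed names and will differ per chart) is to gate the PDB template on the configured replica count:

```yaml
# templates/pdb.yaml -- hypothetical sketch; value and helper names are
# assumptions, not the charts' real ones.
{{- if gt (int .Values.replicaCount) 1 }}
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: {{ template "example.fullname" . }}
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app: {{ template "example.fullname" . }}
{{- end }}
```

With a guard like this, a single-replica install simply skips the budget, while anyone who raises the replica count still gets drain protection.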
How to reproduce it (as minimally and precisely as possible):
- helm install stable/memcached
- kubectl drain {Node that the memcached pod landed on}
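To confirm that the budget is what blocks the drain, inspecting it (for example with kubectl get pdb <name> -o yaml) should show a status along these lines (illustrative values based on the PodDisruptionBudget API shape, not captured from an actual run):

```yaml
# Illustrative status only -- exact numbers depend on the cluster.
status:
  currentHealthy: 1
  desiredHealthy: 1
  expectedPods: 1
  disruptionsAllowed: 0     # zero allowed disruptions is why the drain is refused
```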
Anything else we need to know:
There is a semi-related Kubernetes issue, kubernetes/kubernetes#48307, which essentially concluded "works as intended". I agree with that outcome: the Helm charts have defined an "unreasonable" availability requirement, and Kubernetes should not ignore the availability requirements it is given, even when they are unreasonable.