Kubernetes v1.12.x doesn't restore pod-checkpointer #1001
Hi Dalton! The following issue and PR might be relevant: kubernetes/kubernetes#69346 and kubernetes/kubernetes#69566. Are you using a kubelet < 1.11?
The Kubelet matches the control plane version in my clusters.
Iterating through the v1.12 pre-releases, it seems this started happening between v1.12.0-beta.2 and v1.12.0-rc.1 (comparison). v1.12.0-beta.2 doesn't bootstrap (due to various Kubernetes bugs), but it gets far enough to show the pod-checkpointer's checkpoint pod gets created (i.e. there are two pods). That's as far as I've made it so far. It might be worthwhile to post a PR to bootkube attempting a bump to v1.12.1 to confirm others can repro the original issue. I suspect something within those 88 commits upstream.
Agreed... I am getting an error which looks like this upstream issue: kubernetes/kubernetes#65153.
Removing the `nodeSelector` statements from both the checkpointer and apiserver checkpoint files restores the pods correctly.
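The manual workaround above can be sketched as a small Go helper (hypothetical, not part of bootkube) that decodes a checkpointed pod manifest as generic JSON and drops the scheduling constraints:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// stripScheduling removes nodeSelector and affinity from a checkpointed
// pod manifest, mirroring the manual edit described above. This is an
// illustrative helper, not actual bootkube code.
func stripScheduling(manifest []byte) ([]byte, error) {
	var pod map[string]interface{}
	if err := json.Unmarshal(manifest, &pod); err != nil {
		return nil, err
	}
	if spec, ok := pod["spec"].(map[string]interface{}); ok {
		delete(spec, "nodeSelector")
		delete(spec, "affinity")
	}
	return json.Marshal(pod)
}

func main() {
	in := []byte(`{"kind":"Pod","spec":{"nodeSelector":{"node-role.kubernetes.io/master":""},"affinity":{},"containers":[]}}`)
	out, err := stripScheduling(in)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // {"kind":"Pod","spec":{"containers":[]}}
}
```

The same edit can of course be done by hand on the files under the checkpoint directory; the point is only which fields need to go.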
I see that as well if I delete the pod-checkpointer DaemonSet pod: the checkpointed pod can't schedule. It's a great tip; it's easier to see what's going on from a running cluster (rather than after power cycling). Comparing actual checkpointed pod manifests between a v1.11.3 cluster and a v1.12.1 cluster, I see a difference.
Maybe related to kubernetes/kubernetes#68173, which was not in earlier releases.
I suppose it is unusual that checkpoints have a node selector or affinity at all, since they're pods on disk and should always run on that node. But looking at the checkpoint manifests in v1.11.3, those also had a node selector. I tried a similar experiment to yours: launching a v1.12.1 cluster, power cycling it, but then modifying the pod-checkpointer and apiserver checkpoint files to remove the affinity. Of course, as soon as the cluster recovers, the pod-checkpointer overwrites the checkpoint file to include an affinity again. So only one of the two pods is running, and I'd expect the same issue on the next power cycle. Perhaps pod-checkpointer should strip the affinity.
I am thinking the affinity should be set to nil if matchExpressions == nil. |
Sounds reasonable to me. I wonder if …
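The proposal above might look roughly like this in Go. These are simplified stand-in structs rather than the real k8s.io/api/core/v1 types (and `MatchExpressions` is reduced to a string slice), so only the nil-handling logic is shown:

```go
package main

import "fmt"

// Simplified stand-ins for the corev1 affinity types; the real
// pod-checkpointer operates on k8s.io/api/core/v1 structs.
type NodeSelectorTerm struct {
	MatchExpressions []string // stand-in for []NodeSelectorRequirement
}

type NodeSelector struct {
	NodeSelectorTerms []NodeSelectorTerm
}

type NodeAffinity struct {
	RequiredDuringSchedulingIgnoredDuringExecution *NodeSelector
}

type Affinity struct {
	NodeAffinity *NodeAffinity
}

// sanitizeAffinity returns nil unless the affinity carries actual
// matchExpressions, so a checkpointed pod can always run on the local
// node. A sketch of the idea discussed above, not the merged fix.
func sanitizeAffinity(a *Affinity) *Affinity {
	if a == nil || a.NodeAffinity == nil {
		return nil
	}
	req := a.NodeAffinity.RequiredDuringSchedulingIgnoredDuringExecution
	if req == nil {
		return nil
	}
	for _, term := range req.NodeSelectorTerms {
		if term.MatchExpressions != nil {
			return a // keep: a real expression-based constraint exists
		}
	}
	return nil
}

func main() {
	// A v1.12-style default affinity whose terms carry no
	// matchExpressions gets dropped entirely.
	a := &Affinity{NodeAffinity: &NodeAffinity{
		RequiredDuringSchedulingIgnoredDuringExecution: &NodeSelector{
			NodeSelectorTerms: []NodeSelectorTerm{{}},
		},
	}}
	fmt.Println(sanitizeAffinity(a) == nil) // true
}
```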
* Mount an empty dir for the controller-manager to work around kubernetes/kubernetes#68973
* Use a patched pod-checkpointer that strips affinity from checkpointed pod manifests. Kubernetes v1.12.0-rc.1 introduced a default affinity that appears on checkpointed manifests; it prevented scheduling, and checkpointed pods should not have an affinity since they're run directly by the Kubelet on the local node
  * kubernetes-retired/bootkube#1001
  * kubernetes/kubernetes#68173
The issue with the pod-checkpointer was closed by #1009. Thanks @rphillips! A new image is available; it can be used with v1.12 or prior versions too, since it's not really tied to v1.12. I'm closing since actually upgrading to v1.12 is separate and continues in #1003.
Kubernetes v1.12.x doesn't currently work with the pod-checkpointer. In my exploration so far, bootstrapping a v1.12.1 cluster succeeds (working around one known issue), and the pod-checkpointer checkpoints itself to `/etc/kubernetes/manifests` and moves the apiserver checkpoint to inactive. Normal so far. For sanity's sake, the following work alright as well:

In the past, the "checkpoint" meant there was a 2nd pod running in a typical cluster. Starting in v1.12, only `pod-checkpointer-2kflw` exists. With verbosity turned up, the Kubelet on the controller continuously reports an error.

This becomes a serious issue when power cycling the cluster. The Kubelet starts, reads static manifests from `/etc/kubernetes/manifests` (containing the checkpointed pod-checkpointer), and logs that it's skipping creating the pod-checkpointer. As a result, the cluster does not return. I'm still hunting for the upstream commit that may have altered handling for static/mirror pods.