Which component are you using?:
cluster-autoscaler
What version of the component are you using?:
Component version: 1.28.2, 1.31.0
What k8s version are you using (kubectl version)?:
kubectl version output:
# 1.28
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.14-dd.1
# 1.31
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.1-dd.2
What environment is this in?:
Observed in an AWS environment, but can occur in other environments.
What did you expect to happen?:
When the autoscaler has not yet synced its caches (notably its pod cache), it should not take autoscaling actions.
What happened instead?:
The autoscaler's pod cache was empty; as a result, it wrongly identified nodes as empty and scaled them in, resulting in many workloads being unexpectedly deleted.
How to reproduce it (as minimally and precisely as possible):
It's not consistently reproducible; in our scenario, the control plane was under stress and returning many 429s for various API calls. The cluster had a large number of pods (25k+) and nodes (2k+); the Nodes cache synced after a few retries, but the Pods list repeatedly hit timeouts for another 20 minutes.
k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:172: Failed to watch *v1.Pod: failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)
During this time, the autoscaler incorrectly identified 300+ nodes as empty and managed to improperly scale in 200+ of them before the pod cache synced, at which point it realized these nodes were not actually empty.
Scale-down: couldn't delete empty node, node is not empty, status error: failed to delete empty node "<node>", new pods scheduled
Once the Pod cache synced, the now-identified pods triggered scale ups and the cluster recovered, but not before interrupting all the workloads on those nodes.
Anything else we need to know?:
In a local build of the autoscaler from master, I injected a call to AllPodsLister.List() immediately after the call to informerFactory.Start() and confirmed that it returns an empty slice and no error when the cache is not yet populated.
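For illustration, here is a minimal standalone sketch of that behavior using plain client-go rather than the autoscaler's own AllPodsLister wrapper (the kubeconfig setup and variable names are just placeholders for this example): listing from the pod informer's lister immediately after Start(), before any cache sync, returns an empty slice and a nil error even on a busy cluster.

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder setup: build a client from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	factory := informers.NewSharedInformerFactory(client, 0)
	// Creating the lister registers the pod informer with the factory.
	podLister := factory.Core().V1().Pods().Lister()

	stopCh := make(chan struct{})
	factory.Start(stopCh)

	// No WaitForCacheSync here: the lister answers from its still-empty
	// local store, so this prints "pods=0 err=<nil>" even on a cluster
	// with thousands of pods.
	pods, err := podLister.List(labels.Everything())
	fmt.Printf("pods=%d err=%v\n", len(pods), err)
}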
My initial proposal would be to just add a call to informerFactory.WaitForCacheSync() after the informerFactory is started in buildAutoscaler; this would block the autoscaler's startup until all the caches have synced. However, the cluster-autoscaler has a lot of caches (I saw 17 different caches created in the logs), and I wonder whether there would be interest in making this more granular so that only the most vital caches (pods + nodes, probably?) need to be populated before startup.
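As a rough sketch of both variants (the helper name below is hypothetical; the real change would sit next to the existing informerFactory.Start() call in buildAutoscaler):

package main

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
	"k8s.io/klog/v2"
)

// waitForCoreCaches is a hypothetical helper, not the actual autoscaler code.
// It shows where the sync would happen relative to Start().
func waitForCoreCaches(informerFactory informers.SharedInformerFactory, stopCh <-chan struct{}) {
	// Touch the informers before Start() so the factory creates and starts them.
	podsSynced := informerFactory.Core().V1().Pods().Informer().HasSynced
	nodesSynced := informerFactory.Core().V1().Nodes().Informer().HasSynced

	// Corresponds to the existing informerFactory.Start() call in buildAutoscaler.
	informerFactory.Start(stopCh)

	// Option A: block until every cache the factory created has synced.
	//   informerFactory.WaitForCacheSync(stopCh)

	// Option B (more granular): block only on the caches the autoscaler
	// cannot safely act without.
	if !cache.WaitForCacheSync(stopCh, podsSynced, nodesSynced) {
		klog.Fatalf("timed out waiting for pod/node informer caches to sync")
	}
}

Option B mirrors the common controller pattern of gating startup with cache.WaitForCacheSync on just the informers whose data is read on the hot path, rather than waiting for every cache the factory knows about.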