cluster-autoscaler scales down nodes before pod informer cache has synced #7419

Open
domenicbozzuto opened this issue Oct 22, 2024 · 1 comment
Labels
area/cluster-autoscaler kind/bug

Comments

@domenicbozzuto
Contributor

Which component are you using?:
cluster-autoscaler

What version of the component are you using?:
1.28.2, 1.31.0

Component version: 1.28.2, 1.31.0

What k8s version are you using (kubectl version)?:

kubectl version Output
# 1.28
$ kubectl version
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.28.14-dd.1
# 1.31
$ kubectl version 
Client Version: v1.29.0
Kustomize Version: v5.0.4-0.20230601165947-6ce0bf390ce3
Server Version: v1.31.1-dd.2

What environment is this in?:

Observed in an AWS environment, but can occur in other environments.

What did you expect to happen?:

When the autoscaler has not yet synced its caches (notably its pod cache), it should not take autoscaling actions.

What happened instead?:

The autoscaler's pod cache was empty; as a result, it wrongly identified nodes as empty and scaled them in, resulting in many workloads being unexpectedly deleted.

How to reproduce it (as minimally and precisely as possible):

It's not consistently reproducible; in our scenario, the control plane was under stress and returning many 429s for various API calls. The cluster had a large number of pods (25k+) and nodes (2k+); the Node cache synced after a few retries, but the Pod list calls repeatedly timed out for another 20 minutes.

k8s.io/autoscaler/cluster-autoscaler/utils/kubernetes/listers.go:172: Failed to watch *v1.Pod: failed to list *v1.Pod: the server was unable to return a response in the time allotted, but may still be processing the request (get pods)

During this window, the autoscaler incorrectly identified 300+ nodes as empty and improperly scaled in 200+ of them before the pod cache synced, at which point it realized these nodes were not actually empty.

Scale-down: couldn't delete empty node, node is not empty, status error: failed to delete empty node "<node>", new pods scheduled

Once the Pod cache synced, the now-identified pods triggered scale-ups and the cluster recovered, but not before all the workloads on the deleted nodes were interrupted.

Anything else we need to know?:

In a local build of the autoscaler from master, I injected a call to AllPodsLister.List() immediately after the call to informerFactory.Start() and confirmed that it returns an empty slice and no error when the cache has not yet been populated.
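
For illustration, here is a minimal standalone client-go sketch (not the autoscaler's actual code; the kubeconfig path and resync period are arbitrary) showing that a pod lister queried immediately after Start(), before the cache has synced, can return an empty slice with a nil error, which is indistinguishable from a cluster that genuinely has no pods:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (illustrative only).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	factory := informers.NewSharedInformerFactory(client, time.Hour)
	podLister := factory.Core().V1().Pods().Lister()

	stopCh := make(chan struct{})
	defer close(stopCh)
	factory.Start(stopCh)

	// Listing right after Start(), without waiting for the cache to sync:
	// on a slow or throttled control plane this prints "pods=0 err=<nil>"
	// even though pods exist, because the lister serves from the
	// not-yet-populated local cache.
	pods, err := podLister.List(labels.Everything())
	fmt.Printf("pods=%d err=%v\n", len(pods), err)
}
```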

My initial proposal would be to just add a call to informerFactory.WaitForCacheSync() after the informerFactory is started in buildAutoscaler; this would block the autoscaler's startup until all the caches have synced. However, the cluster-autoscaler has a lot of caches (I saw 17 different caches created in the logs), and I wonder if there would be interest in making this more granular, so that only the most vital caches (pods + nodes, probably?) have to be populated before startup proceeds.
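
A rough sketch of what the two options could look like, assuming standard client-go APIs; the package and function names here (sketch, waitForVitalCaches) are made up for illustration and either option alone would suffice:

```go
package sketch // illustrative; not the actual cluster-autoscaler package

import (
	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
	"k8s.io/klog/v2"
)

// waitForVitalCaches (hypothetical name) would run right after
// informerFactory.Start() and before the autoscaling loop begins.
func waitForVitalCaches(informerFactory informers.SharedInformerFactory, stopCh <-chan struct{}) {
	// Coarse option: block until every informer started by the factory
	// reports HasSynced (or stopCh is closed).
	for typ, ok := range informerFactory.WaitForCacheSync(stopCh) {
		if !ok {
			klog.Fatalf("failed to sync informer cache for %v", typ)
		}
	}

	// Granular option: wait only on the caches the scale-down decision
	// actually depends on (pods and nodes).
	podSynced := informerFactory.Core().V1().Pods().Informer().HasSynced
	nodeSynced := informerFactory.Core().V1().Nodes().Informer().HasSynced
	if !cache.WaitForCacheSync(stopCh, podSynced, nodeSynced) {
		klog.Fatal("timed out waiting for pod and node informer caches to sync")
	}
}
```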

@domenicbozzuto added the kind/bug label on Oct 22, 2024
@domenicbozzuto
Contributor Author

/area cluster-autoscaler
