
Do not block scaling up due to pending/not yet complete node deletion #4051

Closed
Michael-Sinz opened this issue Apr 30, 2021 · 20 comments
Labels
area/cluster-autoscaler kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Michael-Sinz

Which component are you using?: Cluster Autoscaler

Is your feature request designed to solve a problem? If so describe the problem this feature should solve.:
When the autoscaler deletes instances in an Azure VMSS, it will not scale up until those delete operations are seen to have completed. There are times when deletes have taken over an hour, and during that time many new nodes were needed. In one case, we needed over 200 new nodes that were never added because the autoscaler would not attempt a scale-up while the deletes were incomplete.

It turns out that if we restart the autoscaler under these conditions, it will perform one scale-up operation, then notice the deleting/to-be-deleted nodes again, try to delete them again, and stop scaling. During the business-hours ramp-up this can cause significant problems: our clusters often add hundreds of nodes in that window, and if a node from earlier is still stuck deleting, the clusters run short of compute and a service outage follows.

In one case a small scale-up (3 new nodes) somehow failed within the VMSS and the instances timed out. When the autoscaler then tried to delete those "unregistered" instances, the deletes did not complete right away and blocked further scale-up attempts. This had a significant negative impact on the operation of the autoscaler; again, restarting the autoscaler got it to add some nodes before it noticed the "unregistered" instances and tried to delete them again. Another restart got us one more scale-up before it went back to deleting the "unregistered" instances.

Describe the solution you'd like.:
If the autoscaler would simply continue its normal scale-up behavior while waiting for deletes to complete, that would address situations like this. Basically, do not block scaling up because of pending deletes. When new nodes are needed, the slow-to-delete nodes are not going to help in any way and should not be part of the consideration.
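
Roughly, the behavior I would like is something like the following sketch (illustrative names only, not actual cluster-autoscaler internals): a node whose delete is already in flight simply stops counting when the autoscaler sizes the cluster, so a slow VMSS delete never holds back a scale-up.

```go
package sketch

// usableNodes is a minimal sketch of the idea above, assuming the
// autoscaler tracks which node names already have a delete in flight.
// None of these names correspond to real cluster-autoscaler code.
func usableNodes(allNodes []string, pendingDelete map[string]bool) []string {
	usable := make([]string, 0, len(allNodes))
	for _, name := range allNodes {
		if pendingDelete[name] {
			// A node that is already being deleted cannot host the
			// pending pods, so it should not block or count toward
			// the scale-up decision.
			continue
		}
		usable = append(usable, name)
	}
	return usable
}
```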

Describe any alternative solutions you've considered.:
We are looking at simply restarting the autoscaler any time we see repeated attempts to delete the same instance. This is very inefficient but does address the issue without deploying a new autoscaler. It is a hack, but it has been shown (by manual action) to work as a remediation of the problem.
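
For reference, the restart itself can be triggered the same way kubectl rollout restart does it, by bumping a pod-template annotation. This is only a sketch: the kube-system namespace and cluster-autoscaler Deployment name are assumptions about the installation, and detecting the repeated-delete condition is left out.

```go
package sketch

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// restartClusterAutoscaler forces a rollout restart of the cluster-autoscaler
// Deployment by patching a pod-template annotation. Namespace and Deployment
// name are assumptions; adjust them for your installation.
func restartClusterAutoscaler(ctx context.Context) error {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	patch := fmt.Sprintf(
		`{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":%q}}}}}`,
		time.Now().Format(time.RFC3339))
	_, err = client.AppsV1().Deployments("kube-system").Patch(
		ctx, "cluster-autoscaler", types.StrategicMergePatchType,
		[]byte(patch), metav1.PatchOptions{})
	return err
}
```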

Additional context.:
This is on large, very dynamic clusters in Azure, which regularly scale from, for example, 100 nodes to 600 nodes and back down again as usage patterns change.

@Michael-Sinz added the kind/feature label Apr 30, 2021
@dharmab
Contributor

dharmab commented May 7, 2021

One mitigation we use is to have multiple VMSSes with identical nodes. This helps constrain VMSS-scoped issues to a subset of the cluster and gives CA an alternative to scale out. This had a large impact on Azure API usage in older versions of Kubernetes, but 1.18+ reduced the impact considerably.

@Michael-Sinz
Author

We have done that, but at our scale it still ends up with problems; when we tried it, it tended to introduce other scaling issues.

(We get into Azure "rate limit" jail during scale-down in the evenings, since many of those operations are 1 VM at a time. Scale-up is often 50 to 100 VMs at once, but work trickles off the nodes more slowly.)

@k8s-triage-robot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 5, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Sep 4, 2021
@Michael-Sinz
Author

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label Sep 7, 2021
@amirschw

There are times when deletes have taken over an hour, and during that time many new nodes were needed. In one case, we needed over 200 new nodes that were never added because the autoscaler would not attempt a scale-up while the deletes were incomplete.

+1. We just had a similar case, which we only found out about in retrospect, where hundreds of pods were stuck unschedulable because a single Azure VMSS instance took almost 2 hours to delete.

@Michael-Sinz
Author

Note that our current process is, when things look stuck because a delete has not completed, to restart the cluster autoscaler pod. This lets it notice a pending scale-up before it gets back to the needed delete and becomes stuck in the delete attempt again.

It is a harsh hack but it helps mitigate the problem temporarily.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jan 12, 2022
@Michael-Sinz
Author

/remove-lifecycle rotten

@Michael-Sinz
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jan 12, 2022
@ravidbro

+1
It's a very painful behavior right now

@marwanad
Member

It's important to call out that this concerns unregistered node deletions, which is an error-recovery loop, and not your usual VM deletes.

If this were due to regular scale-downs, there might be an issue in the Azure provider, but none of that provider code is blocking in the sense that it would block the CA main loop or scale-ups.

https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/core/static_autoscaler.go#L340-L352

This method issues delete requests and blocks until those get cleared up. That will happen in only two cases:

  1. VM deletion is completed. You're saying it's taking upwards of 2hr.
  2. The cache for VMs is invalidated and returns the deleted VM.

One option would be for the Azure provider to take the instance out of its cache once the delete call is issued. However, the next cache refresh (5 minutes) would bring the instance back, so I'm not sure how much this would improve the situation.

The other option, which I prefer, is for the autoscaler not to block on that. The logic would have to change to something like "remove old unregistered nodes and then clear the unregistered list" so that it can make progress.
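
Roughly, the non-blocking version would look something like the sketch below (illustrative stand-ins, not the actual static_autoscaler.go code): issue deletes for long-unregistered instances, remember that they were requested, and fall through to scale-up instead of waiting for the cloud side to finish.

```go
package sketch

import "log"

// loop is a toy stand-in for the autoscaler's main iteration state.
type loop struct {
	inFlight   map[string]bool         // deletes already requested
	deleteNode func(name string) error // cloud-provider delete call
	runScaleUp func() error            // the normal scale-up pass
}

func (l *loop) reconcile(unregistered []string) error {
	for _, name := range unregistered {
		if l.inFlight[name] {
			// Already asked the cloud to delete it; do not re-issue
			// the request or wait for it to complete.
			continue
		}
		if err := l.deleteNode(name); err != nil {
			log.Printf("delete of unregistered node %s failed: %v", name, err)
			continue
		}
		l.inFlight[name] = true
	}
	// Fall through to scale-up even while deletions are still pending,
	// instead of returning early until the unregistered list is empty.
	return l.runScaleUp()
}
```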

@Michael-Sinz
Author

Yes, it is an error-recovery loop, but it happens far more often than one might expect. VM deletes or VM scale-ups sometimes fail, and the instances become unregistered nodes: they are listed in the VMSS but are not part of the Kubernetes cluster (never joined).

Under heavy scaling (adding/removing tens or hundreds of nodes) it is possible for one or more instances to fail and end up as unregistered nodes, which the autoscaler then tries to delete again (and again, and again, if they take a long time). During that time, if more nodes are needed, the autoscaler will not add them because these unregistered nodes have yet to finish deleting.

Restarting the autoscaler resets its internal state; it performs a single scale-up, then notices the unregistered nodes and gets stuck waiting for them to delete before scaling up again.

I think that management of unregistered nodes should be kept separate from the question of scaling requests.
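
To make the term concrete, here is a toy illustration of what "unregistered" means here (not the actual implementation): an instance that exists in the VMSS but never showed up as a Node in the cluster.

```go
package sketch

// unregistered returns VMSS instance names that never joined the cluster.
// Both inputs are illustrative; the real autoscaler works from cloud-provider
// and apiserver state rather than plain string lists.
func unregistered(vmssInstances []string, registeredNodes map[string]bool) []string {
	var missing []string
	for _, name := range vmssInstances {
		if !registeredNodes[name] {
			// Created in Azure but never joined Kubernetes.
			missing = append(missing, name)
		}
	}
	return missing
}
```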

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 13, 2022
@Michael-Sinz
Author

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jun 14, 2022
@amirschw

Looks like scaling activity is no longer blocked by deletion of unregistered nodes since #4810.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Nov 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Dec 15, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) Jan 14, 2023