azure: StrictCacheUpdates to disable proactive vmss cache updates #7481

Merged

merged 2 commits on Nov 11, 2024

Conversation

@jackfrancis (Contributor) commented Nov 9, 2024

What type of PR is this?

/kind bug

What this PR does / why we need it:

This PR adds a StrictCacheUpdates configuration for the Azure cluster-autoscaler provider (triggered by the AZURE_STRICT_CACHE_UPDATES runtime environment variable) that enables a modified Azure VMSS delete flow so that the local CA cache representation of node pool replica count is only updated upon a successful delete.

The current predictive cache update behavior was originally implemented here:

The above works in most cases, but when VMSS delete operations repeatedly fail, the local CA cache gradually decrements itself on each subsequent delete attempt, with the side effect of eventually sending a VMSS replica update with a significantly reduced replica count once the Azure API errors cease.
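
For illustration, here is a minimal Go sketch of the two flows described above. The StrictCacheUpdates field name and the gating behavior come from this PR's diff; everything else (types, helper names) is a simplified placeholder rather than the actual provider code.

```go
package sketch

import "errors"

// Simplified stand-ins for the provider's real types.
type azConfig struct {
	StrictCacheUpdates bool
}

type deleteFuture struct{ failed bool }

type scaleSetSketch struct {
	config     azConfig
	cachedSize int64 // local CA cache of the node pool replica count
}

// beginDelete stands in for the asynchronous VMSS delete-instances call.
func (s *scaleSetSketch) beginDelete(ids []string) (*deleteFuture, error) {
	return &deleteFuture{}, nil
}

// waitForCompletion stands in for polling the Azure operation to completion.
func (s *scaleSetSketch) waitForCompletion(f *deleteFuture) error {
	if f.failed {
		return errors.New("delete failed")
	}
	return nil
}

// deleteInstances contrasts the two cache-update strategies. With
// StrictCacheUpdates disabled (the default), the cached size is decremented
// before the delete is confirmed; repeated delete failures therefore walk the
// cache further and further below the real VMSS size. With it enabled, the
// cache is only touched after a confirmed success.
func (s *scaleSetSketch) deleteInstances(ids []string) error {
	if !s.config.StrictCacheUpdates {
		// Proactive path: assume success and shrink the cache now.
		s.cachedSize -= int64(len(ids))
	}

	future, err := s.beginDelete(ids)
	if err != nil {
		return err
	}
	if err := s.waitForCompletion(future); err != nil {
		// Strict path: the cache was never touched, so there is nothing to
		// roll back here.
		return err
	}

	if s.config.StrictCacheUpdates {
		// Strict path: update the cache only after the API confirms success.
		s.cachedSize -= int64(len(ids))
	}
	return nil
}
```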

Which issue(s) this PR fixes:

Fixes #7432

Special notes for your reviewer:

Does this PR introduce a user-facing change?

azure: StrictCacheUpdates to disable proactive vmss cache updates

Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


Signed-off-by: Jack Francis <jackfrancis@gmail.com>
@k8s-ci-robot k8s-ci-robot added do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. kind/bug Categorizes issue or PR as related to a bug. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Nov 9, 2024
@k8s-ci-robot k8s-ci-robot added area/cluster-autoscaler approved Indicates a PR has been approved by an approver from all required OWNERS files. area/provider/azure Issues or PRs related to azure provider size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Nov 9, 2024
@comtalyst (Contributor) left a comment

Seems like this is moving away from the existing idea of "proactively update size on deletion start, then invalidate cache if deletion fails" and going for "update size when deletion is complete".

I think both ideas have issues by nature, since each relies on an outdated assumption/prediction:

  • Existing "proactively update" idea can have inconsistency on scale-up while waiting for an already-dead scale-down
    • Inconsistency after dead scale-down should be solved by refreshing cache on all layers
  • New idea is more prone to scaling down below min count on scale-down while waiting for another scale-down to complete

Maybe incorporating the min count check heuristic suggested in the issue could be one solution? Or sacrificing some performance and allowing only one scale-down operation at a time per NodeGroup (a rough sketch of that serialization idea appears after this comment)?
In the long run, if they work, I would still prefer both suggestions to live in the core rather than in the provider. But if we are doing this in the provider for now, we should at least look at how to "reject" core calls properly without breaking state management.



Existing "proactively update" idea can have inconsistency on scale-up while waiting for an already-dead scale-down

Still, I think we should try to understand the consequences of this more. Is the user-facing problem overscaling, disruption (e.g., a "scale-up" operation removing nodes rather than adding them because the new target size is lower), or something else? Is eventual consistency not good enough?
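
To make the "one scale-down at a time per NodeGroup" suggestion above concrete, here is a rough, purely illustrative Go sketch; it is not part of this PR, and every name in it is hypothetical.

```go
package sketch

import "sync"

// scaleDownSerializer is a hypothetical guard that allows at most one
// in-flight scale-down per node group. It trades some parallelism for never
// issuing a new delete while another one for the same group is still pending.
type scaleDownSerializer struct {
	mu       sync.Mutex
	inFlight map[string]bool // node group name -> scale-down in progress
}

func newScaleDownSerializer() *scaleDownSerializer {
	return &scaleDownSerializer{inFlight: make(map[string]bool)}
}

// tryAcquire reports whether the caller may start a scale-down for the group;
// it returns false if one is already running.
func (s *scaleDownSerializer) tryAcquire(nodeGroup string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.inFlight[nodeGroup] {
		return false
	}
	s.inFlight[nodeGroup] = true
	return true
}

// release marks the node group's scale-down as finished.
func (s *scaleDownSerializer) release(nodeGroup string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.inFlight, nodeGroup)
}
```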

@@ -558,11 +543,9 @@ func (scaleSet *ScaleSet) waitForDeleteInstances(future *azure.Future, requiredI
isSuccess, err := isSuccessHTTPResponse(httpResponse, err)
if isSuccess {
klog.V(3).Infof(".WaitForDeleteInstancesResult(%v) for %s success", requiredIds.InstanceIds, scaleSet.Name)
// No need to invalidateInstanceCache because instanceStates were proactively set to "deleting"
scaleSet.invalidateInstanceCache()
Review comment (Contributor):

Per the logic I mentioned in #7432, this cache refresh alone shouldn't be enough to pull fresh state from CRP. There is another layer of cache we should refresh as well, and the order matters.

Suggested change
scaleSet.invalidateInstanceCache()
if err := scaleSet.manager.forceRefresh(); err != nil {
klog.Errorf("forceRefresh failed with error: %v", err)
}
scaleSet.invalidateInstanceCache()

Could look into embedding this inside invalidateInstanceCache() itself too.
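
As a hypothetical sketch of the "embed it inside invalidateInstanceCache()" idea, using simplified stand-in types rather than the provider's actual ScaleSet and manager:

```go
package sketch

import (
	"log"
	"time"
)

// managerSketch stands in for the provider's manager; forceRefresh here just
// records that a full cache refresh was requested.
type managerSketch struct{ lastRefresh time.Time }

func (m *managerSketch) forceRefresh() error {
	m.lastRefresh = time.Now()
	return nil
}

type scaleSetCacheSketch struct {
	manager             *managerSketch
	lastInstanceRefresh time.Time
}

// invalidateInstanceCache sketches the variant floated above: refresh the
// manager-level cache first (order matters, per the review comment), then
// drop the scale set's own instance cache so the next read repopulates from
// fresh data. The real code would use klog rather than the standard log
// package.
func (s *scaleSetCacheSketch) invalidateInstanceCache() {
	if err := s.manager.forceRefresh(); err != nil {
		// A failed refresh is logged but does not block invalidation.
		log.Printf("forceRefresh failed with error: %v", err)
	}
	// Zeroing the timestamp forces the instance cache to be rebuilt on the
	// next access.
	s.lastInstanceRefresh = time.Time{}
}
```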

Signed-off-by: Jack Francis <jackfrancis@gmail.com>
@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Nov 11, 2024
@jackfrancis jackfrancis changed the title WIP azure: don’t eagerly update vmss cache before delete success azure: StrictCacheUpdates to disable proactive vmss cache updates Nov 11, 2024
@k8s-ci-robot k8s-ci-robot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Nov 11, 2024
@jackfrancis (Contributor Author) commented:

@comtalyst Thanks for your review and perspective here.

Based on your analysis that there are trade-offs between both approaches, I think shipping, behind a configuration flag, an implementation that only updates the cache after the Azure API has confirmed a successful delete may help customers who want to experiment with this alternate approach.

This change should have no effect on existing customers.
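
For readers wondering how the opt-in could be wired, here is a minimal, illustrative sketch of parsing an environment-variable-driven flag. The variable name AZURE_STRICT_CACHE_UPDATES comes from the PR description; the parsing code itself is not the provider's actual config loader.

```go
package sketch

import (
	"log"
	"os"
	"strconv"
)

// Config carries only the field relevant here; the real provider config has
// many more fields and its own loading path.
type Config struct {
	StrictCacheUpdates bool
}

// loadConfigFromEnv reads AZURE_STRICT_CACHE_UPDATES. An unset or unparsable
// value falls back to false, preserving the existing proactive-cache behavior
// for anyone not opting in.
func loadConfigFromEnv() Config {
	cfg := Config{}
	if v := os.Getenv("AZURE_STRICT_CACHE_UPDATES"); v != "" {
		b, err := strconv.ParseBool(v)
		if err != nil {
			log.Printf("ignoring invalid AZURE_STRICT_CACHE_UPDATES=%q: %v", v, err)
		} else {
			cfg.StrictCacheUpdates = b
		}
	}
	return cfg
}
```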

@comtalyst (Contributor) left a comment

Sounds like a fair mitigation, but let's still keep an eye out for better approaches that solve both the scale-down-below-min-count issue and the delete-failure inconsistency issue.

scaleSet.invalidateInstanceCache()
if !scaleSet.manager.config.StrictCacheUpdates {
// On failure, invalidate the instanceCache - cannot have instances in deletingState
scaleSet.invalidateInstanceCache()
Review comment (Contributor):

Should we add cache refresh here as well?

Suggested change
scaleSet.invalidateInstanceCache()
if err := scaleSet.manager.forceRefresh(); err != nil {
klog.Errorf("forceRefresh failed with error: %v", err)
}
scaleSet.invalidateInstanceCache()

@jackfrancis (Contributor Author):

Doing so would introduce a functional change for folks not opting into this new config feature. I think keeping the current behaviors unmodified for the default use case is a priority.

@comtalyst (Contributor):

/lgtm
/approve
/hold

feel free to unhold whenever you are ready

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 11, 2024
@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Nov 11, 2024
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: comtalyst, jackfrancis

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@jackfrancis (Contributor Author):

/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.30
/cherry-pick cluster-autoscaler-release-1.31

/hold cancel

@k8s-infra-cherrypick-robot

@jackfrancis: once the present PR merges, I will cherry-pick it on top of cluster-autoscaler-release-1.28, cluster-autoscaler-release-1.29, cluster-autoscaler-release-1.30, cluster-autoscaler-release-1.31 in new PRs and assign them to you.

In response to this:

/cherry-pick cluster-autoscaler-release-1.28
/cherry-pick cluster-autoscaler-release-1.29
/cherry-pick cluster-autoscaler-release-1.30
/cherry-pick cluster-autoscaler-release-1.31

/hold cancel

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Nov 11, 2024
@k8s-ci-robot k8s-ci-robot merged commit 93f74c0 into kubernetes:master Nov 11, 2024
6 of 7 checks passed
@k8s-infra-cherrypick-robot

@jackfrancis: new pull request created: #7484

@k8s-infra-cherrypick-robot

@jackfrancis: new pull request created: #7485

@k8s-infra-cherrypick-robot

@jackfrancis: new pull request created: #7486

@k8s-infra-cherrypick-robot

@jackfrancis: new pull request created: #7487

Labels

approved - Indicates a PR has been approved by an approver from all required OWNERS files.
area/cluster-autoscaler
area/provider/azure - Issues or PRs related to azure provider.
cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
kind/bug - Categorizes issue or PR as related to a bug.
lgtm - "Looks good to me", indicates that a PR is ready to be merged.
size/M - Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

CAS appears to lose track of instances in VMSS and makes incorrect scaling decisions (#7432)
4 participants