azure: StrictCacheUpdates to disable proactive vmss cache updates #7481

Conversation
Signed-off-by: Jack Francis <jackfrancis@gmail.com>
Seems like this is moving away from the existing idea of "proactively update size on deletion start, then invalidate cache if deletion fails" toward "update size when deletion is complete".

I think both ideas have their own issues by nature, since both rely on an outdated assumption/prediction:

- Existing "proactively update" idea can have inconsistency on scale-up while waiting for an already-dead scale-down
  - Inconsistency after a dead scale-down should be solved by refreshing the cache on all layers
- New idea is more prone to scaling down below min count when a scale-down starts while waiting for another scale-down to complete

Maybe incorporating the min count check heuristic that has been suggested in the issue might be one solution (rough sketch below)? Or sacrificing some performance and only allowing one scale-down operation at a time per NodeGroup?

If they work, I still hope both suggestions end up in core rather than in the provider in the long run. But if we are doing this in the provider for now, we should at least look at how to "reject" core calls properly without breaking state management.
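As a loose illustration of that min-count heuristic (this is not code from the PR; `canScaleDown`, `curSize`, `minSize`, and `pendingDeleteCount` are hypothetical names), the guard could look something like:

```go
// canScaleDown is a hypothetical guard: it counts instances already
// pending deletion against the projected size, so concurrent scale-downs
// cannot take the node group below its configured minimum.
func canScaleDown(curSize, minSize, pendingDeleteCount, toDelete int) bool {
	projected := curSize - pendingDeleteCount - toDelete
	return projected >= minSize
}
```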
Existing "proactively update" idea can have inconsistency on scale-up while waiting for an already-dead scale-down
Still, I think we should try to understand the consequence of this more. Is the user-facing problem about overscaling, disruption (e.g., by "scale-up" operations removing nodes rather than adding due to new size being lower), etc? Is eventual consistency not good enough?
```diff
@@ -558,11 +543,9 @@ func (scaleSet *ScaleSet) waitForDeleteInstances(future *azure.Future, requiredI
 	isSuccess, err := isSuccessHTTPResponse(httpResponse, err)
 	if isSuccess {
 		klog.V(3).Infof(".WaitForDeleteInstancesResult(%v) for %s success", requiredIds.InstanceIds, scaleSet.Name)
-		// No need to invalidateInstanceCache because instanceStates were proactively set to "deleting"
+		scaleSet.invalidateInstanceCache()
```
Per the logic I mentioned in #7432, this cache refresh alone shouldn't be enough to get fresh data from CRP. There is another layer of cache we should refresh as well, and order matters.
```diff
-	scaleSet.invalidateInstanceCache()
+	if err := scaleSet.manager.forceRefresh(); err != nil {
+		klog.Errorf(...)
+	}
+	scaleSet.invalidateInstanceCache()
```
Could look into embedding this inside `invalidateInstanceCache()` itself too.
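For concreteness, a minimal sketch of what that embedding might look like, assuming the usual `klog` and `time` imports and that `manager.forceRefresh()` exists as in the suggestion above (the mutex and timestamp fields are assumptions about the surrounding struct, not the actual implementation):

```go
// Sketch: fold the upper-layer refresh into invalidateInstanceCache()
// so that every invalidation also refreshes the manager's cache first.
func (scaleSet *ScaleSet) invalidateInstanceCache() {
	scaleSet.instanceMutex.Lock()
	defer scaleSet.instanceMutex.Unlock()
	// Refresh the upper cache layer first so the next local re-read
	// does not repopulate from stale data (order matters, as noted above).
	if err := scaleSet.manager.forceRefresh(); err != nil {
		klog.Errorf("forceRefresh failed with error: %v", err)
	}
	// Expire the local instance cache so the next access refetches.
	scaleSet.lastInstanceRefresh = time.Time{}
}
```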
Signed-off-by: Jack Francis <jackfrancis@gmail.com>
@comtalyst Thanks for your review and perspective here. Given your analysis that both approaches carry trade-offs, I think shipping this alternate flow behind a configuration flag, serializing cache updates until the Azure API has returned positive delete validation, may help customers who want to experiment with it. This change should have no effect on existing customers.
Sounds like a fair mitigation, but let's still keep an eye out for better approaches that solve both the scale-down-below-min-count and the delete-failure inconsistency issues.
```diff
-	scaleSet.invalidateInstanceCache()
+	if !scaleSet.manager.config.StrictCacheUpdates {
+		// On failure, invalidate the instanceCache - cannot have instances in deletingState
+		scaleSet.invalidateInstanceCache()
+	}
```
Should we add cache refresh here as well?
```diff
-	scaleSet.invalidateInstanceCache()
+	if err := scaleSet.manager.forceRefresh(); err != nil {
+		klog.Errorf("forceRefresh failed with error: %v", err)
+	}
+	scaleSet.invalidateInstanceCache()
```
Doing so would introduce a functional change for folks not opting into this new config feature. I think keeping the current behaviors unmodified for the default use case is a priority.
/lgtm

feel free to unhold whenever you are ready
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: comtalyst, jackfrancis

The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/cherry-pick cluster-autoscaler-release-1.28

/hold cancel
@jackfrancis: once the present PR merges, I will cherry-pick it on top of cluster-autoscaler-release-1.28 in a new PR and assign it to you.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@jackfrancis: new pull request created: #7484

@jackfrancis: new pull request created: #7485

@jackfrancis: new pull request created: #7486

@jackfrancis: new pull request created: #7487
What type of PR is this?
/kind bug
What this PR does / why we need it:
This PR adds a `StrictCacheUpdates` configuration for the Azure cluster-autoscaler provider (enabled via the `AZURE_STRICT_CACHE_UPDATES` runtime environment variable) that switches to a modified Azure VMSS delete flow in which the local CA cache's representation of the node pool replica count is only updated after a successful delete.
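As a rough sketch of how such an env-var opt-in might be wired (the `StrictCacheUpdates` field name matches the PR, but `loadStrictCacheUpdates` and its placement are hypothetical):

```go
package azure

import (
	"os"
	"strconv"
)

// Config holds Azure provider settings; only the flag relevant here is shown.
type Config struct {
	// StrictCacheUpdates defers cache size updates until a delete succeeds.
	StrictCacheUpdates bool
}

// loadStrictCacheUpdates reads the opt-in flag from the environment,
// defaulting to false so existing behavior is unchanged.
func loadStrictCacheUpdates(cfg *Config) {
	if v := os.Getenv("AZURE_STRICT_CACHE_UPDATES"); v != "" {
		if b, err := strconv.ParseBool(v); err == nil {
			cfg.StrictCacheUpdates = b
		}
	}
}
```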
The current predictive cache update implementation was originally implemented here:

The above works in most cases, but in situations where VMSS delete operations repeatedly fail, the local CA cache gradually decrements itself on each subsequent delete attempt, which has the side effect of eventually sending a VMSS replica update with a significantly reduced replica count once the Azure API errors cease.
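A toy walk-through of that failure mode (the numbers are illustrative, not from the PR):

```go
package main

import "fmt"

func main() {
	cachedSize := 10 // the VMSS actually has 10 instances throughout
	for attempt := 1; attempt <= 3; attempt++ {
		// Proactive update: decrement the cache before the API call...
		cachedSize -= 2
		// ...but the Azure delete call fails, so real capacity stays at 10.
	}
	fmt.Printf("cached=%d actual=10\n", cachedSize) // cached=4
	// Once the API recovers, a capacity update driven by the cache would
	// request 4 replicas, removing 6 healthy nodes in one step.
}
```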
Which issue(s) this PR fixes:
Fixes #7432
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: