Add support for frequent loops when provisioningrequest is encountered in last iteration #7271
Conversation
Hi @Duke0404. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/ok-to-test
@kawych will you be able to review?
@aleksandra-malinowska I can review later today
Have a small comment, otherwise LGTM
Force-pushed from 2f629ac to 169b99c
Force-pushed from 7fe3b40 to fc3ca6b
/lgtm
Force-pushed from 9ba7e09 to 9cfa863
cluster-autoscaler/loop/trigger.go (Outdated)
	t.initialized = true
}

// provisioningRequestWasProcessed is used to check if provisioningRequestProcessTimeGetter is not nil and a provisioning request was processed in the last iteration
nit: pls remove the comments here and below. Comments are not required for private function and these functions are short and self-explanatory enough to not require extra insight.
Removed comment here but kept the comment on triggerNextIteration, because the behaviour of the function is not entirely self-explanatory imo.
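For readers following the thread, below is a minimal, self-contained sketch of what a helper like provisioningRequestWasProcessed could look like. The interface shape and the lastIterationStart field are illustrative assumptions for this sketch, not the exact code in trigger.go.

```go
package loop

import "time"

// provisioningRequestProcessTimeGetter is an illustrative stand-in for the
// component that reports when a ProvisioningRequest was last processed.
type provisioningRequestProcessTimeGetter interface {
	LastProvisioningRequestProcessTime() time.Time
}

// LoopTrigger is reduced here to the two fields this sketch needs.
type LoopTrigger struct {
	provisioningRequestProcessTimeGetter provisioningRequestProcessTimeGetter
	lastIterationStart                   time.Time
}

// provisioningRequestWasProcessed returns false when no getter is configured
// (i.e. ProvisioningRequest support is disabled) and true when a request was
// processed after the previous iteration started.
func (t *LoopTrigger) provisioningRequestWasProcessed() bool {
	if t.provisioningRequestProcessTimeGetter == nil {
		return false
	}
	processed := t.provisioningRequestProcessTimeGetter.LastProvisioningRequestProcessTime()
	return processed.After(t.lastIterationStart)
}
```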
/lgtm
/lgtm
cluster-autoscaler/loop/trigger.go (Outdated)
		klog.Infof("Autoscaler loop triggered immediately after a productive iteration")
	}
	return
	t.triggerNextIteration("Autoscaler loop triggered immediately after a productive iteration")
Isn't an iteration that processed a provisioning request also "productive"?
Not necessarily, because a ProvisioningRequest can be marked as failed and we will still trigger the next loop immediately.
Sure, though my point is it maybe makes sense to be a bit more explicit about the reason, as "productive" can mean different things. "Autoscaler loop triggered immediately after scale up" / "Autoscaler loop triggered immediately after scale down"?
Added separate logs for scale up and scale down.
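To illustrate the resolution, here is a hedged sketch of a triggerNextIteration helper with the more explicit reason strings discussed above. The loopStart channel and the afterScaleUp/afterScaleDown call sites are illustrative assumptions; only the reason messages come from the discussion.

```go
package loop

import "k8s.io/klog/v2"

// LoopTrigger is reduced to the channel used to wake up the main loop.
type LoopTrigger struct {
	loopStart chan struct{}
}

// triggerNextIteration wakes the main loop and logs why; the reason string is
// what the review asked to make more specific.
func (t *LoopTrigger) triggerNextIteration(reason string) {
	select {
	case t.loopStart <- struct{}{}:
		klog.Info(reason)
	default:
		klog.Infof("Next iteration already requested: %s", reason)
	}
}

// Illustrative call sites using the separate scale-up and scale-down messages.
func (t *LoopTrigger) afterScaleUp() {
	t.triggerNextIteration("Autoscaler loop triggered immediately after a scale up")
}

func (t *LoopTrigger) afterScaleDown() {
	t.triggerNextIteration("Autoscaler loop triggered immediately after a scale down")
}
```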
cluster-autoscaler/loop/trigger.go (Outdated)
	}
}

// Initialize initializes the LoopTrigger object by providing a pointer to the UnschedulablePodObserver
func (t *LoopTrigger) Initialize(podObserver *UnschedulablePodObserver) {
What is the benefit of splitting initialization into 2 phases? This comes with additional complexity like the need to suddenly handle errors when waiting.
The aim was to make minimal changes to the args and return values of the buildAutoscaler function. The trigger can only be initialized within the buildAutoscaler function, as the injector is present there. @yaroslava-serdiuk felt that creating the injector in the run function and passing that to buildAutoscaler was not good, because the injector is only a relevant component for CA if the user has ProvisioningRequests enabled and thus should not be a part of the buildAutoscaler args.

The podObserver can only be created within the run function, as it requires the background context of that function. Therefore, having an initialize method which serves as a setter for the podObserver was deemed the best solution.
Thanks for sharing the background!
I think the context can safely be created before the call to buildAutoscaler and passed there - you can then remove two-phase init and actually simplify podObserver creation a bit too by reusing autoscaling options available there.
Modified accordingly.
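To make the conclusion concrete, here is a hedged sketch of the single-phase wiring the reviewer suggests: the context is created in run() and passed into buildAutoscaler, so the pod observer can be constructed there and handed to the trigger without a separate Initialize() call. All signatures and the startPodObserver helper are simplified assumptions, not the real cluster-autoscaler code.

```go
package main

import "context"

// Simplified stand-ins for the real cluster-autoscaler types.
type UnschedulablePodObserver struct{}

type LoopTrigger struct {
	podObserver *UnschedulablePodObserver
}

// startPodObserver is an illustrative constructor; the real observer watches
// unschedulable pods until ctx is cancelled.
func startPodObserver(ctx context.Context) *UnschedulablePodObserver {
	_ = ctx
	return &UnschedulablePodObserver{}
}

// buildAutoscaler receives the context created in run(), so the pod observer
// can be built here and handed to the trigger in one step, with no separate
// Initialize() phase.
func buildAutoscaler(ctx context.Context) *LoopTrigger {
	return &LoopTrigger{podObserver: startPodObserver(ctx)}
}

func run(ctx context.Context) {
	trigger := buildAutoscaler(ctx)
	_ = trigger // the main loop would use the trigger here
}

func main() {
	run(context.Background())
}
```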
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: Duke0404, x13n. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Created lastProvisioningRequestSeenTime, which gets updated whenever a ProvisioningRequest is encountered in an iteration and which is used, when frequent loops are enabled, to start the next iteration without delay, as sketched below.
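A minimal sketch of the idea, assuming the trigger records the time a ProvisioningRequest was last seen and compares it with the start of the previous iteration. Apart from lastProvisioningRequestSeenTime, the field and method names are illustrative and may not match the actual implementation.

```go
package loop

import "time"

// LoopTrigger is reduced to what this sketch needs.
type LoopTrigger struct {
	scanInterval                    time.Duration
	lastIterationStart              time.Time
	lastProvisioningRequestSeenTime time.Time
}

// markProvisioningRequestSeen is called when an iteration encounters a
// ProvisioningRequest.
func (t *LoopTrigger) markProvisioningRequestSeen(now time.Time) {
	t.lastProvisioningRequestSeenTime = now
}

// nextIterationDelay returns zero (start the next loop immediately) when a
// ProvisioningRequest was seen during the last iteration, and the normal scan
// interval otherwise.
func (t *LoopTrigger) nextIterationDelay() time.Duration {
	if t.lastProvisioningRequestSeenTime.After(t.lastIterationStart) {
		return 0
	}
	return t.scanInterval
}
```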
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
cc: @yaroslava-serdiuk @aleksandra-malinowska @kawych