Add AEP for restoring selector support to VPA #8956
Conversation
Adding the `do-not-merge/release-note-label-needed` label because no release-note block was detected; please follow our release note process to remove it. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Welcome @msudheendra-cflt!
Hi @msudheendra-cflt. Thanks for your PR. I'm waiting for a github.com member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: msudheendra-cflt. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
This PR may require API review. If so, when the changes are ready, complete the pre-review checklist and request an API review. Status of requested reviews is tracked in the API Review project.
Signed-off-by: Manoj Sudheendra <msudheendra@confluent.io>
Force-pushed from 9962baf to 59988f4.
First of all, thanks for the AEP! I think it is important to start a discussion around workloads that have heterogeneous resource requirements under the same controller.

We already have one related AEP that tackles DaemonSets: AEP-7942: Vertical Pod Autoscaling for DaemonSets with Heterogeneous Resource Requirements. This proposal could focus on Deployments and StatefulSets, which I believe is its intention.

From reading this AEP, it appears to propose solving vertical autoscaling for workloads that use the leader election pattern. The proposed solution assumes that pods under a single controller are labeled by another controller based on which pod is the leader. I think a better approach (or the first use case to address) would be to watch Kubernetes Lease objects directly.

At the moment the intent of this AEP seems to be limited to cases where the leader election pattern is used for Deployments/StatefulSets. It would be great to hear what other use cases could lead to heterogeneous resource requirements within the same workload. Off the top of my head, I cannot think of many others, as I mostly deal with workloads that are accessed via a Kubernetes Service.
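For reference, a minimal sketch of the kind of `coordination.k8s.io/v1` Lease object that client-go leader election maintains and that such an approach could watch (all names below are illustrative):

```yaml
# A standard coordination.k8s.io/v1 Lease as maintained by client-go leader
# election. Watching changes to holderIdentity would reveal the current
# leader without requiring a separate controller to re-label pods.
apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: my-app-leader-election   # illustrative name
  namespace: default
spec:
  holderIdentity: my-app-1       # pod currently holding leadership
  leaseDurationSeconds: 15
  acquireTime: "2025-01-01T00:00:00.000000Z"
  renewTime: "2025-01-01T00:00:10.000000Z"
  leaseTransitions: 3
```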
@@ -0,0 +1,105 @@
# AEP-XXXX: Restore Label Selector Support to VPA
I think the AEP title should change to reflect that this proposal aims to improve/solve vertical autoscaling for heterogeneous workloads when the Deployment and StatefulSet controllers are used. Restoring label selector support is only one part of the solution. So please make the intention of the AEP more explicit in its title, thanks!
VPA aggregates metrics from all Pods in the target controller into a single histogram. This averages the usage of "high-utilization" (Leader) and "low-utilization" (Follower) pods.

**The Solution:**
By restoring the `selector` field, users can partition a single workload into multiple VPA profiles based on the Pod's current state:
I worry a little about restoring a field that was once there; I don't know if there are API considerations that need to be made.
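For context, a rough sketch from memory of the deprecated `autoscaling.k8s.io/v1beta1` shape, where a label selector (rather than `targetRef`) was how VPA matched pods before v1beta2 removed it; the exact historical fields should be double-checked against the old API:

```yaml
# Deprecated v1beta1 form (approximate): pods were matched by selector alone.
apiVersion: autoscaling.k8s.io/v1beta1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa          # illustrative name
spec:
  selector:                 # the field this proposal would restore
    matchLabels:
      app: my-app
  updatePolicy:
    updateMode: "Auto"
```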
**Current Rule**: "One VPA per TargetRef."
**New Rule**: "Multiple VPAs per TargetRef are allowed only if their Selectors are non-overlapping."
What happens if we have two VPAs, one with a selector of `role=primary` and another with `purpose=web`, and a pod is created with both of these labels? That means both VPAs match that pod.
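To make the overlap concern concrete, a sketch (labels and names invented, and the restored `selector` field is still hypothetical) showing that two syntactically different selectors can still both match one pod:

```yaml
# Two VPAs targeting the same Deployment with different selector keys.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-a
spec:
  targetRef: {apiVersion: apps/v1, kind: Deployment, name: my-app}
  selector:
    matchLabels: {role: primary}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: vpa-b
spec:
  targetRef: {apiVersion: apps/v1, kind: Deployment, name: my-app}
  selector:
    matchLabels: {purpose: web}
---
# This pod carries both labels, so BOTH selectors match it: non-overlap
# cannot be validated from the selectors alone, only from actual pod labels.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-7d4b9c-xyz   # illustrative name
  labels:
    role: primary
    purpose: web
spec:
  containers:
  - name: app
    image: example.com/app:latest
```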
1. **VPA-Leader:** Selects `role=leader`
2. **VPA-Follower:** Selects `role=follower`

When a Pod promotes from Follower to Leader, its label changes, and it effectively migrates from the Follower VPA to the Leader VPA instantly.
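As a concrete illustration of the partitioning the quoted text describes (again assuming the restored `selector` field takes this hypothetical shape), the two VPAs might look like this; flipping a pod's label from `role=follower` to `role=leader` moves it between them with no change to either VPA object:

```yaml
# Two non-overlapping VPAs over the same StatefulSet, keyed on a role label
# maintained by whatever mechanism performs leader election.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-leader
spec:
  targetRef: {apiVersion: apps/v1, kind: StatefulSet, name: my-app}
  selector:
    matchLabels: {role: leader}
---
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-follower
spec:
  targetRef: {apiVersion: apps/v1, kind: StatefulSet, name: my-app}
  selector:
    matchLabels: {role: follower}
```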
How does this solution integrate with controllers?
What type of PR is this?
/kind feature
/kind api-change
What this PR does / why we need it:
Created as a follow-up to #8848. This is an AEP for supporting VPA for heterogeneous workloads (such as leader/follower).