VPA: Why use targetRef instead of a Pod selector? #6925

Comments
We ran into this recently as well. We use Argo Rollouts, and have many hundreds of
Regarding the question of why VPA requires a controller: @ejholmes, I guess the easiest solution here would be to raise the client-side rate limits in VPA. The defaults are quite low, so if you really put some load on VPA, it is absolutely important to raise those limits on all 3 components, or else you will run into trouble.
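For illustration, raising those limits would look roughly like this on each of the recommender, updater, and admission-controller Deployments. The `--kube-api-qps`/`--kube-api-burst` flags are the ones I believe the components expose; the values, container name, and image tag are only an example, not a tuned recommendation:

```yaml
# Sketch of raising the client-side rate limits on one VPA component
# (the same flags would be set on the recommender and admission-controller).
# Flag names are the ones I believe the components expose; the values and
# image tag are illustrative, not a recommendation.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vpa-updater
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: vpa-updater
  template:
    metadata:
      labels:
        app: vpa-updater
    spec:
      serviceAccountName: vpa-updater
      containers:
        - name: updater
          image: registry.k8s.io/autoscaling/vpa-updater:1.2.0  # illustrative tag
          args:
            - --kube-api-qps=50     # default is much lower (around 5)
            - --kube-api-burst=100  # default is much lower (around 10)
```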
@ejholmes, wrt #6925 (comment): we also ran into this recently. @voelzmo also created #6884, describing that the current safeguard mechanism between the vpa-updater and vpa-admission-controller, based on the lease resource, does not cover every case that can lead to endless Pod evictions by the vpa-updater.
I am not sure, but as far as I saw these days, the earlier versions of the API were based on a selector, and a newer version changed it to targetRef. I didn't dig into the details of the exact motivation. Maybe the intention was to align the HPA and VPA APIs to use targetRef. But HPA's architecture is suitable for such a targetRef approach: it is a single control loop in KCM, and it does not need to find a matching HPA for a given Pod. Given the specifics of VPA's architecture and components (the resource update mechanism through eviction and webhook mutation), the targetRef approach is not very suitable or scalable.
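For reference, HPA targets a controller through scaleTargetRef, and that reference is resolved by the single HPA control loop in KCM, so the lookup direction (HPA → controller) fits its architecture. A standard autoscaling/v2 example (names and values are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app                 # illustrative name
spec:
  scaleTargetRef:              # resolved by the single HPA loop in kube-controller-manager
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80
```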
👍 to the above. I've also opened #7024 to add better support for non-standard controllers. Tweaking the kube client QPS settings helped us, but unfortunately wasn't enough for us to be comfortable putting VPA in production. I understand why targetRef was chosen, but it would still be nice if users could drop down to a raw selector when needed.
How to categorize this issue?
/area vertical-pod-autoscaler
/kind feature
Problem description:
Right now, in VPA it is expensive/ineffective to get the matching VPA for a given Pod (`GetMatchingVPA`). For example, the vpa-admission-controller goes through the following code on every Pod mutation:

- autoscaler/vertical-pod-autoscaler/pkg/admission-controller/resource/vpa/matcher.go, line 53 (commit 2bba2ba)
- autoscaler/vertical-pod-autoscaler/pkg/admission-controller/resource/vpa/matcher.go, line 63 (commit 2bba2ba)
- autoscaler/vertical-pod-autoscaler/pkg/target/fetcher.go, line 140 (commit c79bdaf)
- autoscaler/vertical-pod-autoscaler/pkg/target/fetcher.go, line 186 (commit c79bdaf)

`FindTopMostWellKnownOrScalable` finds the top-most well-known or scalable controller by tracing back the ownerReferences of the Pod. In `FindTopMostWellKnownOrScalable` we also GET the /scale subresource if we have a controller that is not well-known. That GET request is cached and the cache is refreshed every 10 minutes; negative results are cached as well.

Currently, VPA requires a `targetRef` and internally resolves the targetRef to a selector, then verifies that the Pod's labels match the VPA targetRef's selector and that the Pod's top-most well-known scalable controller is the one from the targetRef.

It would be much more efficient if VPA instead supported a Pod selector in its spec (such as Deployment/Service/NetworkPolicy and other K8s resources do) and only verified that the VPA spec selector matches the Pod's labels.
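To make the comparison concrete, here is a minimal sketch: the first object uses the existing targetRef field of autoscaling.k8s.io/v1; the second shows a hypothetical selector field that does not exist in the VPA API today and is only meant to illustrate the proposal:

```yaml
# Today: the VPA must reference a controller via targetRef.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa             # illustrative name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
---
# Hypothetical alternative: match Pods directly by labels, the way
# Services and NetworkPolicies do. spec.selector does NOT exist in the
# VPA API today; it only illustrates the proposal.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa
spec:
  selector:
    matchLabels:
      app: my-app
```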
What is the benefit of using targetRef over a Pod selector? Why does VPA have to know about the targetRef? Isn't it enough for VPA to be able to find the Pods for a given VPA resource and vice versa?