
VPA assigns minAllowed.memory instead of actual recommendation when restarting Prometheus pod if Limitrange is set #7429

Open
MatteoMori8 opened this issue Oct 24, 2024 · 4 comments
Assignees
Labels
area/vertical-pod-autoscaler kind/bug Categorizes issue or PR as related to a bug.

Comments

@MatteoMori8

Which component are you using?:

  • vertical-pod-autoscaler

What version of the component are you using?:

  • 1.2.1

Component version:

What k8s version are you using (kubectl version)?:

Server Version: v1.29.8-eks

What environment is this in?:

  • AWS - eks

What did you expect to happen?:
I expected VPA's Admission Controller to patch my Prometheus pod with a request.memory value equal to the memory target recommendation.

What happened instead?:
Instead, the new pod got deployed with the following:

  • limit.memory was halved
  • request.memory was set to the value I specified as minAllowed.memory in the VPA object itself

CPU, on the other hand, was updated correctly.

How to reproduce it (as minimally and precisely as possible):

VPA object
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: prometheus-test
spec:
  resourcePolicy:
    containerPolicies:
    - containerName: prometheus
      controlledResources:
      - cpu
      - memory
      controlledValues: RequestsAndLimits
      maxAllowed:
        memory: 30Gi
      minAllowed:
        cpu: 100m
        memory: 500Mi
    - containerName: thanos-sidecar
      controlledResources:
      - cpu
      - memory
      controlledValues: RequestsAndLimits
      maxAllowed:
        memory: 20Gi
      minAllowed:
        memory: 100Mi
  targetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: prometheus-test
  updatePolicy:
    updateMode: "Initial"

LimitRange
apiVersion: v1
kind: LimitRange
metadata:
  name: limits
spec:
  limits:
  - max:
      memory: 60Gi
    min:
      cpu: 1m
      memory: 50Mi
    type: Pod
  - default:
      memory: 512Mi
    defaultRequest:
      cpu: 20m
      memory: 256Mi
    min:
      memory: 50Mi
    type: Container

Anything else we need to know?:

We discussed this on the #sig-autoscaling Slack channel, and it turns out that deleting the LimitRange puts VPA back on track: it then manages memory correctly.

Personally, I do not know whether this is a bug or an opportunity to improve the docs, but we thought it would be good to raise an issue about it.
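For anyone trying to reason about the reported numbers: one plausible (simplified and unverified — this is not the actual VPA source) model of the interaction is that the admission controller caps the recommended limit to what the LimitRange allows for the container, scales the request down by the same ratio to preserve the request:limit proportion, and then clamps the result up to the VPA's minAllowed. That sequence would produce exactly the observed symptoms: a reduced limit.memory and a request.memory stuck at minAllowed.memory. A minimal sketch of that assumed behaviour:

```python
GI = 1024 ** 3
MI = 1024 ** 2


def apply_limit_range_cap(request_bytes, limit_bytes,
                          max_limit_bytes, min_allowed_bytes):
    """Hypothetical simplification of the VPA/LimitRange interaction:
    cap the limit to the LimitRange maximum, scale the request by the
    same factor, then clamp the request to the VPA's minAllowed."""
    if limit_bytes > max_limit_bytes:
        scale = max_limit_bytes / limit_bytes
        limit_bytes = max_limit_bytes
        # Preserving the request:limit ratio can push the request
        # below minAllowed, at which point it gets clamped there.
        request_bytes = max(request_bytes * scale, min_allowed_bytes)
    return request_bytes, limit_bytes


# Example: recommended request 1Gi with a 40Gi limit, but the
# LimitRange only allows 10Gi for this container (values are
# illustrative, not taken from the report above).
req, lim = apply_limit_range_cap(1 * GI, 40 * GI, 10 * GI, 500 * MI)
print(req // MI, lim // GI)  # request clamped to 500Mi, limit capped to 10Gi
```

Under this model, the scaled request (1Gi × 0.25 = 256Mi) falls below minAllowed.memory (500Mi) and is clamped to it, which would explain why the new pod came up with the minAllowed value instead of the recommendation.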

@MatteoMori8 MatteoMori8 added the kind/bug Categorizes issue or PR as related to a bug. label Oct 24, 2024
@adrianmoisey
Member

/area vertical-pod-autoscaler

@adrianmoisey
Member

/assign

@adrianmoisey
Member

Does the Prometheus statefulset define any resources?

@adrianmoisey
Member

Instead, the new pod got deployed with the following:

  • limit.memory got halved
  • request.memory has been assigned the value that I specified as minAllowed.memory from the VPA object itself

Which container in the Pod got these set?

3 participants