
[Bug]: Missing support for horizontal pod autoscaling with NIC5 using NAP5 #7800

Open
@jjngx

Description

Version

v5.0.0

What Kubernetes platforms are you running on?

Kind

Steps to reproduce

On behalf of @fabriziofiorucci:

Q: Do we support horizontal pod autoscaling with NIC 5 using NAP 5? I deployed the full stack using:

helm install nic oci://ghcr.io/nginx/charts/nginx-ingress \
  --version 2.1.0 \
  --set controller.image.repository=private-registry.nginx.com/nginx-ic-nap-v5/nginx-plus-ingress \
  --set controller.image.tag=5.0.0 \
  --set controller.nginxplus=true \
  --set controller.appprotect.enable=true \
  --set controller.appprotect.v5=true \
  --set-json 'controller.appprotect.volumes=[
    {"name":"app-protect-bd-config","emptyDir":{}},
    {"name":"app-protect-config","emptyDir":{}},
    {"name":"app-protect-bundles","persistentVolumeClaim":{"claimName":"app-protect-bundles"}}
  ]' \
  --set controller.serviceAccount.imagePullSecretName=regcred \
  --set controller.mgmt.licenseTokenSecretName=license-token \
  --set controller.service.type=LoadBalancer \
  --set 'controller.volumeMounts[0].name=app-protect-bundles' \
  --set 'controller.volumeMounts[0].mountPath=/etc/app_protect/bundles/' \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=8Gi \
  --set controller.autoscaling.enabled=true \
  --set controller.autoscaling.minReplicas=1 \
  --set controller.autoscaling.maxReplicas=4 \
  --set controller.autoscaling.targetCPUUtilizationPercentage=60 \
  -n nginx-ingress
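
For reference, the same settings can be expressed as a values file instead of --set flags (a mechanical, untested transcription of the flags above; the file name values.yaml is arbitrary):

# values.yaml — equivalent to the --set flags above
controller:
  image:
    repository: private-registry.nginx.com/nginx-ic-nap-v5/nginx-plus-ingress
    tag: 5.0.0
  nginxplus: true
  appprotect:
    enable: true
    v5: true
    volumes:
      - name: app-protect-bd-config
        emptyDir: {}
      - name: app-protect-config
        emptyDir: {}
      - name: app-protect-bundles
        persistentVolumeClaim:
          claimName: app-protect-bundles
  serviceAccount:
    imagePullSecretName: regcred
  mgmt:
    licenseTokenSecretName: license-token
  service:
    type: LoadBalancer
  volumeMounts:
    - name: app-protect-bundles
      mountPath: /etc/app_protect/bundles/
  resources:
    limits:
      cpu: 1
      memory: 8Gi
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    targetCPUUtilizationPercentage: 60

helm install nic oci://ghcr.io/nginx/charts/nginx-ingress --version 2.1.0 -f values.yaml -n nginx-ingress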

The metrics API is running as expected:

$ kubectl top pods -n nginx-ingress
NAME                                           CPU(cores)   MEMORY(bytes)   
nic-nginx-ingress-controller-65b9d887d-4n2rc   65m          287Mi  

NIC metrics are collected as expected:

$ kubectl get --raw "/apis/metrics.k8s.io/v1beta1/pods" | jq
{
  "kind": "PodMetricsList",
  "apiVersion": "metrics.k8s.io/v1beta1",
[...]
    {
      "metadata": {
        "name": "nic-nginx-ingress-controller-65b9d887d-4n2rc",
        "namespace": "nginx-ingress",
        "creationTimestamp": "2025-05-15T08:38:34Z",
        "labels": {
          "app.kubernetes.io/instance": "nic",
          "app.kubernetes.io/name": "nginx-ingress",
          "app.kubernetes.io/version": "5.0.0",
          "app.nginx.org/version": "1.27.4-nginx-plus-r34",
          "appprotect.f5.com/version": "5.6.0",
          "pod-template-hash": "65b9d887d"
        }
      },
      "timestamp": "2025-05-15T08:38:19Z",
      "window": "17.449s",
      "containers": [
        {
          "name": "waf-config-mgr",
          "usage": {
            "cpu": "928941n",
            "memory": "7660Ki"
          }
        },
        {
          "name": "waf-enforcer",
          "usage": {
            "cpu": "65197719n",
            "memory": "247772Ki"
          }
        },
        {
          "name": "nginx-ingress",
          "usage": {
            "cpu": "1996249n",
            "memory": "38280Ki"
          }
        }
      ]
    },
[...]
}

The HPA (horizontal pod autoscaler) is created, but it fails to compute CPU/memory utilization for the pod:

$ kubectl describe hpa nic-nginx-ingress-controller -n nginx-ingress
Name:                                                     nic-nginx-ingress-controller
Namespace:                                                nginx-ingress
Labels:                                                   app.kubernetes.io/instance=nic
                                                          app.kubernetes.io/managed-by=Helm
                                                          app.kubernetes.io/name=nginx-ingress
                                                          app.kubernetes.io/version=5.0.0
                                                          helm.sh/chart=nginx-ingress-2.1.0
Annotations:                                              meta.helm.sh/release-name: nic
                                                          meta.helm.sh/release-namespace: nginx-ingress
CreationTimestamp:                                        Thu, 15 May 2025 08:32:17 +0000
Reference:                                                Deployment/nic-nginx-ingress-controller
Metrics:                                                  ( current / target )
  resource memory on pods  (as a percentage of request):  <unknown> / 50%
  resource cpu on pods  (as a percentage of request):     <unknown> / 60%
Min replicas:                                             1
Max replicas:                                             4
Deployment pods:                                          1 current / 0 desired
Conditions:
  Type           Status  Reason                   Message
  ----           ------  ------                   -------
  AbleToScale    True    SucceededGetScale        the HPA controller was able to get the target's current scale
  ScalingActive  False   FailedGetResourceMetric  the HPA was unable to compute the replica count: failed to get memory utilization: missing request for memory in container waf-enforcer of Pod nic-nginx-ingress-controller-65b9d887d-4n2rc
Events:
  Type     Reason                        Age               From                       Message
  ----     ------                        ----              ----                       -------
  Warning  FailedGetResourceMetric       97s               horizontal-pod-autoscaler  failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       97s               horizontal-pod-autoscaler  failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API
  Warning  FailedComputeMetricsReplicas  97s               horizontal-pod-autoscaler  invalid metrics (2 invalid out of 2), first error is: failed to get memory resource metric value: failed to get memory utilization: unable to get metrics for resource memory: no metrics returned from resource metrics API
  Warning  FailedGetResourceMetric       82s               horizontal-pod-autoscaler  failed to get cpu utilization: did not receive metrics for targeted pods (pods might be unready)
  Warning  FailedGetResourceMetric       7s (x6 over 82s)  horizontal-pod-autoscaler  failed to get memory utilization: missing request for memory in container waf-enforcer of Pod nic-nginx-ingress-controller-65b9d887d-4n2rc
  Warning  FailedComputeMetricsReplicas  7s (x6 over 82s)  horizontal-pod-autoscaler  invalid metrics (2 invalid out of 2), first error is: failed to get memory resource metric value: failed to get memory utilization: missing request for memory in container waf-enforcer of Pod nic-nginx-ingress-controller-65b9d887d-4n2rc
  Warning  FailedGetResourceMetric       7s (x5 over 67s)  horizontal-pod-autoscaler  failed to get cpu utilization: missing request for cpu in container waf-enforcer of Pod nic-nginx-ingress-controller-65b9d887d-4n2rc

Since the Helm chart doesn't appear to support setting resources (requests and limits) on a per-container basis (nginx-ingress, waf-enforcer and waf-config-mgr), no requests are set on the waf-enforcer and waf-config-mgr containers in the Deployment manifest. The HPA's resource utilization metrics require a request for the metered resource on every container in the pod, so it cannot compute CPU/memory utilization.

This gap in the Helm chart effectively prevents users from using horizontal pod autoscaling with NAP v5 deployments.
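
As a possible interim workaround (untested; the request values below are only placeholders), the generated Deployment can be patched with a strategic merge patch so the NAP containers get requests and the HPA can compute utilization:

# placeholder request values — adjust to the sidecars' actual usage
kubectl patch deployment nic-nginx-ingress-controller -n nginx-ingress \
  --type strategic -p '{
    "spec": {"template": {"spec": {"containers": [
      {"name": "waf-enforcer", "resources": {"requests": {"cpu": "100m", "memory": "256Mi"}}},
      {"name": "waf-config-mgr", "resources": {"requests": {"cpu": "50m", "memory": "64Mi"}}}
    ]}}}}'

Helm would revert such a patch on the next upgrade, so first-class per-container resource settings in the chart are still needed.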

cc @shaun-nx

Metadata

Assignees: no one assigned
Labels: bug (an issue reporting a potential bug), enhancement (pull requests for new features/feature enhancements)
Project status: Prioritized backlog
Milestone: none