nginx-ingress controller with autoscaler enabled immediately scales up to maximum replicas #10178

Open · cvallesi-kainos opened this issue Jul 5, 2023 · 8 comments
Labels: kind/bug, needs-priority, triage/accepted

cvallesi-kainos commented Jul 5, 2023

What happened:

Autoscaling seems to scale to maximum capacity as soon as the ingress controller is deployed.

What you expected to happen:

Not seeing the ingress scale immediately.

NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version.):


NGINX Ingress controller
Release: v1.8.1
Build: dc88dce
Repository: https://github.com/kubernetes/ingress-nginx
nginx version: nginx/1.21.6


Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"27", GitVersion:"v1.27.1", GitCommit:"4c9411232e10168d7b050c49a1b59f6df9d7ea4b", GitTreeState:"clean", BuildDate:"2023-04-14T13:21:19Z", GoVersion:"go1.20.3", Compiler:"gc", Platform:"linux/amd64"}
Kustomize Version: v5.0.1
Server Version: version.Info{Major:"1", Minor:"25", GitVersion:"v1.25.6", GitCommit:"94c50547e633f1db5d4c56b2b305670e14987d59", GitTreeState:"clean", BuildDate:"2023-06-12T18:46:30Z", GoVersion:"go1.19.5", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration:
  • OS (e.g. from /etc/os-release): Alpine Linux v3.18
  • Kernel (e.g. uname -a): Linux nginx-ingress-ingress-nginx-controller-7c9f44b5f8-z5z24 5.15.0-1040-azure #47-Ubuntu SMP Thu Jun 1 19:38:24 UTC 2023 x86_64 Linux
  • Install tools: Created via GitHub Actions and Terraform. Also tested by creating a cluster "manually" via the Azure Dashboard.
  • Basic cluster related info:
    • kubectl version: See above
    • kubectl get nodes -o wide:
NAME                                STATUS   ROLES   AGE    VERSION   INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
aks-agentpool-15100971-vmss000000   Ready    agent   151m   v1.25.6   10.224.0.6    <none>        Ubuntu 22.04.2 LTS   5.15.0-1040-azure   containerd://1.7.1+azure-1
  • How was the ingress-nginx-controller installed:
    • If helm was used then please show output of helm ls -A | grep -i ingress
nginx-ingress   default         10               2023-07-05 11:06:13.432252818 +0100 BST deployed        ingress-nginx-4.7.0     1.8.0
  • Current State of the controller:
    • kubectl describe ingressclasses
Name:         nginx
Labels:       app.kubernetes.io/component=controller
              app.kubernetes.io/instance=nginx-ingress
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/part-of=ingress-nginx
              app.kubernetes.io/version=1.8.0
              helm.sh/chart=ingress-nginx-4.7.0
Annotations:  meta.helm.sh/release-name: nginx-ingress
              meta.helm.sh/release-namespace: default
Controller:   k8s.io/ingress-nginx
Events:       <none>
  • kubectl -n <ingresscontrollernamespace> get all -A -o wide
NAMESPACE         NAME                                                          READY   STATUS    RESTARTS   AGE    IP            NODE                                NOMINATED NODE   READINESS GATES
calico-system     pod/calico-kube-controllers-684bbcff79-26pcn                  1/1     Running   0          135m   10.244.2.10   aks-agentpool-15100971-vmss000000   <none>           <none>
calico-system     pod/calico-node-lq2sj                                         1/1     Running   0          159m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
calico-system     pod/calico-typha-59f86d8879-wst8h                             1/1     Running   0          135m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
default           pod/nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv   1/1     Running   0          84m    10.244.2.89   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/cloud-node-manager-kgcjh                                  1/1     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-autoscaler-69b7556b86-sprkt                       1/1     Running   0          135m   10.244.2.11   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-fb6b9d95f-bc6vz                                   1/1     Running   0          135m   10.244.2.9    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/coredns-fb6b9d95f-qgmkv                                   1/1     Running   0          134m   10.244.2.12   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/csi-azuredisk-node-n57j7                                  3/3     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/csi-azurefile-node-d7nb8                                  3/3     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/konnectivity-agent-694c59778-fhd2g                        1/1     Running   0          153m   10.244.2.3    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/konnectivity-agent-694c59778-xfxh5                        1/1     Running   0          153m   10.244.2.2    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/kube-proxy-gnppn                                          1/1     Running   0          160m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/metrics-server-67db6db9b5-dvvvq                           2/2     Running   0          131m   10.244.2.13   aks-agentpool-15100971-vmss000000   <none>           <none>
kube-system       pod/metrics-server-67db6db9b5-lc9nf                           2/2     Running   0          131m   10.244.2.14   aks-agentpool-15100971-vmss000000   <none>           <none>
tigera-operator   pod/tigera-operator-6db9d9c5d9-72mg5                          1/1     Running   0          135m   10.224.0.6    aks-agentpool-15100971-vmss000000   <none>           <none>

NAMESPACE       NAME                                                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
calico-system   service/calico-kube-controllers-metrics                    ClusterIP      10.0.182.56   <none>        9094/TCP                     158m   k8s-app=calico-kube-controllers
calico-system   service/calico-typha                                       ClusterIP      10.0.60.210   <none>        5473/TCP                     159m   k8s-app=calico-typha
default         service/kubernetes                                         ClusterIP      10.0.0.1      <none>        443/TCP                      161m   <none>
default         service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.71.84    20.26.39.76   80:31354/TCP,443:32267/TCP   129m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
default         service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.206.31   <none>        443/TCP                      129m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
kube-system     service/kube-dns                                           ClusterIP      10.0.0.10     <none>        53/UDP,53/TCP                160m   k8s-app=kube-dns
kube-system     service/metrics-server                                     ClusterIP      10.0.4.149    <none>        443/TCP                      160m   k8s-app=metrics-server

NAMESPACE       NAME                                        DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR              AGE    CONTAINERS                                       IMAGES                                                                 SELECTOR
calico-system   daemonset.apps/calico-node                  1         1         1       1            1           kubernetes.io/os=linux     159m   calico-node                                      mcr.microsoft.com/oss/calico/node:v3.24.0                              k8s-app=calico-node
calico-system   daemonset.apps/calico-windows-upgrade       0         0         0       0            0           kubernetes.io/os=windows   159m   calico-windows-upgrade                           mcr.microsoft.com/oss/calico/windows-upgrade:v3.24.0                   k8s-app=calico-windows-upgrade
kube-system     daemonset.apps/cloud-node-manager           1         1         1       1            1           <none>                     160m   cloud-node-manager                               mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.15     k8s-app=cloud-node-manager
kube-system     daemonset.apps/cloud-node-manager-windows   0         0         0       0            0           <none>                     160m   cloud-node-manager                               mcr.microsoft.com/oss/kubernetes/azure-cloud-node-manager:v1.25.15     k8s-app=cloud-node-manager-windows
kube-system     daemonset.apps/csi-azuredisk-node           1         1         1       1            1           <none>                     160m   liveness-probe,node-driver-registrar,azuredisk   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.5   app=csi-azuredisk-node
kube-system     daemonset.apps/csi-azuredisk-node-win       0         0         0       0            0           <none>                     160m   liveness-probe,node-driver-registrar,azuredisk   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azuredisk-csi:v1.26.5   app=csi-azuredisk-node-win
kube-system     daemonset.apps/csi-azurefile-node           1         1         1       1            1           <none>                     160m   liveness-probe,node-driver-registrar,azurefile   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.24.2   app=csi-azurefile-node
kube-system     daemonset.apps/csi-azurefile-node-win       0         0         0       0            0           <none>                     160m   liveness-probe,node-driver-registrar,azurefile   mcr.microsoft.com/oss/kubernetes-csi/livenessprobe:v2.10.0,mcr.microsoft.com/oss/kubernetes-csi/csi-node-driver-registrar:v2.8.0,mcr.microsoft.com/oss/kubernetes-csi/azurefile-csi:v1.24.2   app=csi-azurefile-node-win
kube-system     daemonset.apps/kube-proxy                   1         1         1       1            1           <none>                     160m   kube-proxy                                       mcr.microsoft.com/oss/kubernetes/kube-proxy:v1.25.6-hotfix.20230612    component=kube-proxy,tier=node

NAMESPACE         NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS                          IMAGES                                                                                                                     SELECTOR
calico-system     deployment.apps/calico-kube-controllers                  1/1     1            1           159m   calico-kube-controllers             mcr.microsoft.com/oss/calico/kube-controllers:v3.24.0                                                                     k8s-app=calico-kube-controllers
calico-system     deployment.apps/calico-typha                             1/1     1            1           159m   calico-typha                        mcr.microsoft.com/oss/calico/typha:v3.24.0                                                                                k8s-app=calico-typha
default           deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           129m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
kube-system       deployment.apps/coredns                                  2/2     2            2           160m   coredns                             mcr.microsoft.com/oss/kubernetes/coredns:v1.9.4                                                                           k8s-app=kube-dns,version=v20
kube-system       deployment.apps/coredns-autoscaler                       1/1     1            1           160m   autoscaler                          mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5.3                                       k8s-app=coredns-autoscaler
kube-system       deployment.apps/konnectivity-agent                       2/2     2            2           160m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent
kube-system       deployment.apps/metrics-server                           2/2     2            2           160m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server
tigera-operator   deployment.apps/tigera-operator                          1/1     1            1           160m   tigera-operator                     mcr.microsoft.com/oss/tigera/operator:v1.28.0                                                                             name=tigera-operator

NAMESPACE         NAME                                                                DESIRED   CURRENT   READY   AGE    CONTAINERS                          IMAGES                                                                                                                     SELECTOR
calico-system     replicaset.apps/calico-kube-controllers-684bbcff79                  1         1         1       159m   calico-kube-controllers             mcr.microsoft.com/oss/calico/kube-controllers:v3.24.0                                                                     k8s-app=calico-kube-controllers,pod-template-hash=684bbcff79
calico-system     replicaset.apps/calico-typha-59f86d8879                             1         1         1       159m   calico-typha                        mcr.microsoft.com/oss/calico/typha:v3.24.0                                                                                k8s-app=calico-typha,pod-template-hash=59f86d8879
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-75f585d85c   0         0         0       92m    controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75f585d85c
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-7c9f44b5f8   1         1         1       129m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c9f44b5f8
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-84bf68bf66   0         0         0       122m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84bf68bf66
default           replicaset.apps/nginx-ingress-ingress-nginx-controller-84c6679d7    0         0         0       100m   controller                          registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84c6679d7
kube-system       replicaset.apps/coredns-autoscaler-69b7556b86                       1         1         1       160m   autoscaler                          mcr.microsoft.com/oss/kubernetes/autoscaler/cluster-proportional-autoscaler:1.8.5.3                                       k8s-app=coredns-autoscaler,pod-template-hash=69b7556b86
kube-system       replicaset.apps/coredns-fb6b9d95f                                   2         2         2       160m   coredns                             mcr.microsoft.com/oss/kubernetes/coredns:v1.9.4                                                                           k8s-app=kube-dns,pod-template-hash=fb6b9d95f,version=v20
kube-system       replicaset.apps/konnectivity-agent-694c59778                        2         2         2       153m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent,pod-template-hash=694c59778
kube-system       replicaset.apps/konnectivity-agent-79f9756b76                       0         0         0       160m   konnectivity-agent                  mcr.microsoft.com/oss/kubernetes/apiserver-network-proxy/agent:v0.0.33-hotfix.20221110                                    app=konnectivity-agent,pod-template-hash=79f9756b76
kube-system       replicaset.apps/metrics-server-5dd7f7965f                           0         0         0       158m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=5dd7f7965f
kube-system       replicaset.apps/metrics-server-67db6db9b5                           2         2         2       131m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=67db6db9b5
kube-system       replicaset.apps/metrics-server-845978bcd7                           0         0         0       146m   metrics-server-vpa,metrics-server   mcr.microsoft.com/oss/kubernetes/autoscaler/addon-resizer:1.8.14,mcr.microsoft.com/oss/kubernetes/metrics-server:v0.6.3   k8s-app=metrics-server,pod-template-hash=845978bcd7
tigera-operator   replicaset.apps/tigera-operator-6db9d9c5d9                          1         1         1       160m   tigera-operator                     mcr.microsoft.com/oss/tigera/operator:v1.28.0
  • kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>
Name:             nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv
Namespace:        default
Priority:         0
Service Account:  nginx-ingress-ingress-nginx
Node:             aks-agentpool-15100971-vmss000000/10.224.0.6
Start Time:       Wed, 05 Jul 2023 11:07:33 +0100
Labels:           app.kubernetes.io/component=controller
                  app.kubernetes.io/instance=nginx-ingress
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=ingress-nginx
                  app.kubernetes.io/part-of=ingress-nginx
                  app.kubernetes.io/version=1.8.0
                  helm.sh/chart=ingress-nginx-4.7.0
                  pod-template-hash=7c9f44b5f8
Annotations:      cni.projectcalico.org/containerID: 966af3502d17abdccacba182aab4cbf1937a915fe777bb68ee6f3d7c32745d55
                  cni.projectcalico.org/podIP: 10.244.2.89/32
                  cni.projectcalico.org/podIPs: 10.244.2.89/32
Status:           Running
IP:               10.244.2.89
IPs:
  IP:           10.244.2.89
Controlled By:  ReplicaSet/nginx-ingress-ingress-nginx-controller-7c9f44b5f8
Containers:
  controller:
    Container ID:  containerd://2a6c9f37916044f9729cee4b075232d9c05963aaaab7d7f0a1ad4e9da56d64a8
    Image:         registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
    Image ID:      registry.k8s.io/ingress-nginx/controller@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd
    Ports:         80/TCP, 443/TCP, 8443/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    Args:
      /nginx-ingress-controller
      --publish-service=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
      --election-id=nginx-ingress-ingress-nginx-leader
      --controller-class=k8s.io/ingress-nginx
      --ingress-class=nginx
      --configmap=$(POD_NAMESPACE)/nginx-ingress-ingress-nginx-controller
      --validating-webhook=:8443
      --validating-webhook-certificate=/usr/local/certificates/cert
      --validating-webhook-key=/usr/local/certificates/key
    State:          Running
      Started:      Wed, 05 Jul 2023 11:07:34 +0100
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  192Mi
    Requests:
      cpu:      100m
      memory:   128Mi
    Liveness:   http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=5
    Readiness:  http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:
      POD_NAME:       nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv (v1:metadata.name)
      POD_NAMESPACE:  default (v1:metadata.namespace)
      LD_PRELOAD:     /usr/local/lib/libmimalloc.so
    Mounts:
      /usr/local/certificates/ from webhook-cert (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4fftq (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  webhook-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  nginx-ingress-ingress-nginx-admission
    Optional:    false
  kube-api-access-4fftq:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node.kubernetes.io/memory-pressure:NoSchedule op=Exists
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:                      <none>
  • kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>
Name:                     nginx-ingress-ingress-nginx-controller
Namespace:                default
Labels:                   app.kubernetes.io/component=controller
                          app.kubernetes.io/instance=nginx-ingress
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=ingress-nginx
                          app.kubernetes.io/part-of=ingress-nginx
                          app.kubernetes.io/version=1.8.0
                          helm.sh/chart=ingress-nginx-4.7.0
Annotations:              meta.helm.sh/release-name: nginx-ingress
                          meta.helm.sh/release-namespace: default
Selector:                 app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.0.71.84
IPs:                      10.0.71.84
LoadBalancer Ingress:     20.26.39.76
Port:                     http  80/TCP
TargetPort:               http/TCP
NodePort:                 http  31354/TCP
Endpoints:                10.244.2.89:80
Port:                     https  443/TCP
TargetPort:               https/TCP
NodePort:                 https  32267/TCP
Endpoints:                10.244.2.89:443
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
  • Current state of ingress object, if applicable:
    • kubectl -n <appnnamespace> get all,ing -o wide
NAME                                                          READY   STATUS    RESTARTS   AGE   IP            NODE                                NOMINATED NODE   READINESS GATES
pod/nginx-ingress-ingress-nginx-controller-7c9f44b5f8-fzngv   1/1     Running   0          86m   10.244.2.89   aks-agentpool-15100971-vmss000000   <none>           <none>

NAME                                                       TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)                      AGE    SELECTOR
service/kubernetes                                         ClusterIP      10.0.0.1      <none>        443/TCP                      164m   <none>
service/nginx-ingress-ingress-nginx-controller             LoadBalancer   10.0.71.84    20.26.39.76   80:31354/TCP,443:32267/TCP   132m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx
service/nginx-ingress-ingress-nginx-controller-admission   ClusterIP      10.0.206.31   <none>        443/TCP                      132m   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

NAME                                                     READY   UP-TO-DATE   AVAILABLE   AGE    CONTAINERS   IMAGES                                                                                                                     SELECTOR
deployment.apps/nginx-ingress-ingress-nginx-controller   1/1     1            1           132m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx

NAME                                                                DESIRED   CURRENT   READY   AGE    CONTAINERS   IMAGES                                                                                                                     SELECTOR
replicaset.apps/nginx-ingress-ingress-nginx-controller-75f585d85c   0         0         0       95m    controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=75f585d85c
replicaset.apps/nginx-ingress-ingress-nginx-controller-7c9f44b5f8   1         1         1       132m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=7c9f44b5f8
replicaset.apps/nginx-ingress-ingress-nginx-controller-84bf68bf66   0         0         0       125m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84bf68bf66
replicaset.apps/nginx-ingress-ingress-nginx-controller-84c6679d7    0         0         0       102m   controller   registry.k8s.io/ingress-nginx/controller:v1.8.1@sha256:e5c4824e7375fcf2a393e1c03c293b69759af37a9ca6abdb91b13d78a93da8bd   app.kubernetes.io/component=controller,app.kubernetes.io/instance=nginx-ingress,app.kubernetes.io/name=ingress-nginx,pod-template-hash=84c6679d7
  • kubectl -n <appnamespace> describe ing <ingressname>

  • If applicable, then, your complete and exact curl/grpcurl command (redacted if required) and the response to the curl/grpcurl command with the -v flag

  • Others:

    • Any other related information like:
      • copy/paste of the snippet (if applicable)
      • kubectl describe ... of any custom configmap(s) created and in use
      • Any other related information that may help

How to reproduce this issue:

This issue has been tested and is reproducible 100% of the time on Azure Kubernetes Service.

The first step is to deploy an AKS cluster with Standard_B8ms nodes or higher. Lower-class nodes don't seem to have this problem.

Simply enable scaling when installing the chart and you should see the behaviour reported.

helm install nginx-ingress ingress-nginx/ingress-nginx --set controller.autoscaling.enabled=true
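
To observe the behaviour, you can watch the HPA and the pod metrics right after installing (standard kubectl commands; the label selector assumes the chart's default app.kubernetes.io/name=ingress-nginx label):

kubectl get hpa -w
kubectl top pod -l app.kubernetes.io/name=ingress-nginx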

Anything else we need to know:

I encountered this anomaly on a cluster with B12ms VMs used as nodes and started testing possible causes. I noticed that up to B4ms this does not happen. I can't figure out why the exact same configuration misbehaves on nodes with more memory available, but what seems to happen is that the pod is deployed and, as soon as I can get some metrics out, its RAM usage is > 80%, which triggers the deployment of new replicas.

The initial cluster where I became aware of the issue had only nginx, cert-manager, grafana, prometheus and loki deployed.
After some consideration and experiments I deployed a new cluster from scratch with only nginx-ingress installed via helm; the behaviour kept happening in the same way.

I tried increasing/lowering maxReplicas and it always deploys all available replicas.
I also tried enabling the explicit scaleUp and scaleDown policies in the chart; still, it only increases the number of replicas and never scales them down.

During my tests I also tried the two previous versions of the helm chart (corresponding to app versions 1.7.1 and 1.8.0) and the behaviour was the same.

If someone can check whether this happens with other cloud providers on similar node hardware, it would help me understand whether I should instead go to Microsoft for clarification.

@cvallesi-kainos added the kind/bug label Jul 5, 2023
@k8s-ci-robot added the needs-triage and needs-priority labels Jul 5, 2023
longwuyuan (Contributor) commented:

/remove-kind bug
/triage needs-information

Show the output of helm get values <helmreleasename>

@k8s-ci-robot added the triage/needs-information and needs-kind labels and removed the kind/bug label Jul 5, 2023
cvallesi-kainos (Author) commented:

Sure:

USER-SUPPLIED VALUES:
controller:
  autoscaling:
    enabled: true

github-actions bot commented Aug 6, 2023

This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will get to your issue as soon as possible. If you have any questions or want to request prioritization, please reach out on #ingress-nginx-dev on Kubernetes Slack.

@github-actions bot added the lifecycle/frozen label Aug 6, 2023
philipp-durrer-jarowa commented Nov 16, 2023

We have the same problem with our nginx-ingress deployment on AKS (though we use Standard_B2ms machines). I wonder if the autoscaling feature needs resource requests/limits to be set so it can evaluate what exactly 50% or 80% of CPU/memory usage is (see the sketch after the values below).

values.yaml:

controller:
  service:
    externalTrafficPolicy: Local
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-health-probe-request-path: "/healthz"
  extraArgs:
    enable-ssl-passthrough: "" # Needed for Coturn SSL forwarding
  allowSnippetAnnotations: true # Needed for Jitsi Web /config.js block
  autoscaling:
    enabled: true
    minReplicas: 1
    maxReplicas: 4
    targetCPUUtilizationPercentage: 80
    targetMemoryUtilizationPercentage: 80
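
On the resource-limits question above: the HPA computes utilization against the pod's resource requests, so one possible mitigation is to raise the controller's memory request until idle usage sits comfortably below the target. A minimal values.yaml sketch, assuming the chart's controller.resources and controller.autoscaling keys (the 256Mi figure is illustrative, not a recommendation):

controller:
  resources:
    requests:
      cpu: 100m
      memory: 256Mi   # illustrative: keeps ~60-120Mi idle usage well under an 80% target
  autoscaling:
    enabled: true
    targetMemoryUtilizationPercentage: 80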

tomaustin700 commented:
Has this been fixed? I deployed nginx via helm a few days ago with this config:

  set {
    name  = "controller.autoscaling.minReplicas"
    value = "1"
  }

  set {
    name  = "controller.autoscaling.maxReplicas"
    value = "2"
  }

Only one pod was created. In another cluster where I've done this in the past, it created two.
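
A hedged observation on the config above: the ingress-nginx chart only templates the HPA when controller.autoscaling.enabled is true, so setting only minReplicas/maxReplicas may leave autoscaling disabled and the deployment at the default replicaCount of 1. In the same Terraform style, the missing flag would be:

  set {
    name  = "controller.autoscaling.enabled"
    value = "true"
  }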

longwuyuan (Contributor) commented:

/remove-triage needs-information
/kind bug
/triage accepted

@k8s-ci-robot added the kind/bug and triage/accepted labels and removed the needs-kind, triage/needs-information and needs-triage labels Sep 13, 2024
longwuyuan (Contributor) commented:

/remove-lifecycle frozen

@k8s-ci-robot removed the lifecycle/frozen label Sep 13, 2024
grzegorzgniadek commented:
Hi, the controller in an idle state uses roughly 60-120Mi of memory. When you have

Limits:
  cpu:     100m
  memory:  192Mi
Requests:
  cpu:     100m
  memory:  128Mi

and autoscaling enabled with the default 50% targetMemoryUtilizationPercentage, the HPA will always scale up to maxReplicas.
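
That matches the HPA scaling rule from the Kubernetes docs. A worked example with the numbers above (the ~110Mi idle figure is an assumed value inside the 60-120Mi range):

# desiredReplicas = ceil(currentReplicas * currentUtilization / targetUtilization)
#
# requests.memory = 128Mi, idle usage ≈ 110Mi (assumed)
# currentUtilization = 110 / 128 ≈ 86%
# desiredReplicas    = ceil(1 * 86 / 50) = 2, then ceil(2 * 86 / 50) = 4, ...
#
# Idle usage never falls below 50% of the 128Mi request (64Mi), so the HPA
# keeps scaling up to maxReplicas and never scales down.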
