Description
Running multiple NGINX Ingress Controllers as a DaemonSet does not work because of the hostPort setting in the DaemonSet-related resources.
To Reproduce
Steps to reproduce the behavior:
- Set up the first ingress controller:
helm upgrade -i -n infra first-controller nginx-stable/nginx-ingress --create-namespace \
--set controller.kind=daemonset \
--set controller.ingressClass=nginx \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=30080 \
--set controller.service.httpsPort.nodePort=30443
- Set up the second ingress controller:
helm upgrade -i -n infra second-controller nginx-stable/nginx-ingress --create-namespace \
--set controller.kind=daemonset \
--set controller.ingressClass=wss \
--set controller.service.type=NodePort \
--set controller.service.httpPort.nodePort=31080 \
--set controller.service.httpsPort.nodePort=31443 \
--set controller.tolerations[0].operator=Exists
- Watch the pod statuses of the second ingress controller:
# k get pods -n infra -l app=second-controller-nginx-ingress
NAME                                    READY   STATUS    RESTARTS   AGE
second-controller-nginx-ingress-6ngpk   0/1     Pending   0          4m26s
second-controller-nginx-ingress-8nxxg   0/1     Pending   0          4m26s
second-controller-nginx-ingress-982d2   0/1     Pending   0          4m26s
- Look for the reason why the pods are stuck in the Pending state:
# k describe pod second-controller-nginx-ingress-6ngpk -n infra
...
Events:
Type     Reason            Age    From               Message
----     ------            ---    ----               -------
Warning  FailedScheduling  4m15s  default-scheduler  0/8 nodes are available: 1 node(s) didn't have free ports for the requested pod ports, 7 node(s) didn't match Pod's node affinity/selector.
Expected behavior
The second ingress controller is up and running.
My environment
- Version of the Ingress Controller: 1.12.0 (latest stable helm chart, nginx-ingress-0.10.0)
- Version of Kubernetes: 1.21
- Kubernetes platform: self-hosted
- Using NGINX
Additional context
This behavior is caused by the hostPort setting. From the Kubernetes configuration best practices:
https://kubernetes.io/docs/concepts/configuration/overview/
Don't specify a hostPort for a Pod unless it is absolutely necessary. When you bind a Pod to a hostPort, it limits the number of places the Pod can be scheduled, because each <hostIP, hostPort, protocol> combination must be unique. If you don't specify the hostIP and protocol explicitly, Kubernetes will use 0.0.0.0 as the default hostIP and TCP as the default protocol.
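For illustration, this is roughly what the conflicting port declaration looks like in a DaemonSet pod template (field names follow the Kubernetes API; the exact values in the project's manifests may differ):

```yaml
# Sketch of a DaemonSet pod template that binds hostPorts.
# Each <hostIP, hostPort, protocol> tuple must be unique per node,
# so a second DaemonSet requesting hostPort 80/443 cannot be
# scheduled on nodes where the first one is already running.
spec:
  template:
    spec:
      containers:
        - name: nginx-ingress
          ports:
            - name: http
              containerPort: 80
              hostPort: 80
            - name: https
              containerPort: 443
              hostPort: 443
```

This matches the scheduler message above: the one node matching the pod's affinity/selector already has ports 80 and 443 claimed by the first controller's pods.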
The daemon-set configs include the hostPort setting:
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-ingress.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/daemon-set/nginx-plus-ingress.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-daemonset.yaml
The deployment configs do not include the hostPort setting:
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-ingress.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/deployment/nginx-plus-ingress.yaml
https://github.com/nginxinc/kubernetes-ingress/blob/master/deployments/helm-chart/templates/controller-deployment.yaml
Why do we need to use hostPort in the daemonset configurations?
Can we just delete this setting, as in the deployment configurations?
Or should the template allow removing the hostPort configuration when some condition is specified?
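As a sketch of that last option, the chart's daemonset template could guard the hostPort fields with a value. The value name `controller.hostPort.enable` below is a hypothetical example, not an existing chart parameter:

```yaml
# Hypothetical excerpt of controller-daemonset.yaml making hostPort optional.
# controller.hostPort.enable is an assumed value name for illustration only.
ports:
  - name: http
    containerPort: 80
    {{- if .Values.controller.hostPort.enable }}
    hostPort: 80
    {{- end }}
  - name: https
    containerPort: 443
    {{- if .Values.controller.hostPort.enable }}
    hostPort: 443
    {{- end }}
```

With something like this, a second controller could be installed with the flag disabled (e.g. `--set controller.hostPort.enable=false`) and rely on its NodePort Service instead.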