Description
What happened:
At first I used the wrong targetPort in the Service, then created the associated Ingress, but the backend could not be reached, so I fixed the targetPort on the Service. I expected the Ingress controller to reload the Service and forward to the correct port, but in fact it never reloaded and kept sending traffic to the wrong targetPort. I had to rebuild the Ingress from ingress.yaml to make it work properly.
I wonder: is this normal behavior, or is there a reason the Ingress controller cannot support this?
To illustrate the problem with a simple example:
- I created an nginx Service, using the wrong targetPort:
service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
  namespace: test
spec:
  internalTrafficPolicy: Cluster
  ports:
  - name: service-0
    port: 80
    protocol: TCP
    targetPort: 81  # <-- wrong port
  selector:
    app: nginx-demo
  type: ClusterIP
```
- Create an Ingress for this Service. Now if we try to access www.test.com, the Ingress responds with 502 Bad Gateway, as expected:
ingress.yaml:
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test
  namespace: test
spec:
  rules:
  - host: www.test.com
    http:
      paths:
      - backend:
          service:
            name: nginx-demo
            port:
              number: 80
        path: /
        pathType: ImplementationSpecific
```
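As a side note, the controller's own view of its backends can be dumped at any point, which makes it possible to compare what the controller has actually loaded against what the Service manifest says. This is a sketch assuming the ingress-nginx krew plugin is installed and the controller runs in the (typical) `ingress-nginx` namespace; adjust for your install:

```shell
# Show the backend configuration the controller is currently serving,
# including the upstream endpoint addresses and ports it resolved
# for each Service (look for the nginx-demo backend)
kubectl ingress-nginx backends -n ingress-nginx
```

If the endpoints listed here still point at the old targetPort after the Service has been fixed, the controller's view is stale.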
- Fix the targetPort of the Service and apply it with `kubectl apply -f service.yaml`:
service.yaml:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx-demo
  name: nginx-demo
  namespace: test
spec:
  internalTrafficPolicy: Cluster
  ports:
  - name: service-0
    port: 80
    protocol: TCP
    targetPort: 80  # <-- fixed port
  selector:
    app: nginx-demo
  type: ClusterIP
```
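After applying the fix, it is worth confirming that Kubernetes itself picked up the change, since the Endpoints object is what the ingress controller is expected to watch. A minimal check, assuming kubectl access to the same cluster and the resource names from the manifests above:

```shell
# Confirm the Service spec now targets port 80
kubectl get svc nginx-demo -n test \
  -o jsonpath='{.spec.ports[0].targetPort}'; echo

# Confirm the Endpoints object lists pod addresses on the new port;
# the controller builds its upstream list from these endpoints
kubectl get endpoints nginx-demo -n test \
  -o jsonpath='{.subsets[0].ports[0].port}'; echo
```

Both commands should print `80` once the change has propagated; if they do but the Ingress still serves 502, the stale state is on the controller side.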
- `curl` through the Ingress still returns 502 Bad Gateway, but `curl` directly against the Service's ClusterIP responds properly:
```
curl -k www.test.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>

clusterIP=`kubectl get svc nginx-demo -ntest -o=jsonpath='{.spec.clusterIP}'`
curl -k $clusterIP
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
```
- Rebuild the Ingress with `kubectl delete -f ingress.yaml && kubectl apply -f ingress.yaml`; only after that does the Ingress return the correct response.
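To narrow down whether the controller saw the Service update at all, its logs and rendered configuration can be inspected around the time the Service was re-applied. The namespace and deployment name below are the typical Helm defaults and may differ in your install:

```shell
# Look for backend/reload events mentioning the Service after it was re-applied
kubectl -n ingress-nginx logs deploy/ingress-nginx-controller --since=10m \
  | grep -iE 'reload|backend|nginx-demo'

# Inspect the rendered nginx.conf inside the controller pod; note that with
# dynamic backends the actual endpoint list lives in Lua shared memory
# (upstream_balancer), not in nginx.conf itself
kubectl -n ingress-nginx exec deploy/ingress-nginx-controller -- \
  grep -A3 'upstream' /etc/nginx/nginx.conf
```

If no backend-update event appears in the logs for the Service change, that points at the controller missing the Endpoints update rather than at a configuration error.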
What you expected to happen:
The Ingress controller should pick up the Service change by itself; rebuilding the Ingress should not be necessary.
NGINX Ingress controller version (exec into the pod and run `nginx-ingress-controller --version`): v1.10.1
Kubernetes version (use `kubectl version`): v1.23.15
OS (e.g. from /etc/os-release): CentOS 7.6
Kernel (e.g. `uname -a`): Linux 192-168-0-174 3.10.0-862.14.4.el7.x86_64 SMP Wed Sep 26 15:12:11 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm
Basic cluster related info:
- `kubectl version`: v1.23.15
- `kubectl get nodes -o wide`:
```
NAME          STATUS  ROLES        AGE    VERSION   INTERNAL-IP    EXTERNAL-IP  OS-IMAGE               KERNEL-VERSION              CONTAINER-RUNTIME
192.168.0.121 Ready                3d21h  v1.23.15  192.168.0.121               CentOS Linux 7 (Core)  3.10.0-862.14.4.el7.x86_64  docker://18.9.0
192.168.0.174 Ready   etcd,master  4d1h   v1.23.15  192.168.0.174               CentOS Linux 7 (Core)  3.10.0-862.14.4.el7.x86_64  docker://18.9.0
192.168.0.185 Ready   etcd,master  4d1h   v1.23.15  192.168.0.185               CentOS Linux 7 (Core)  3.10.0-957.el7.x86_64       docker://18.9.0
192.168.0.57  Ready   etcd,master  4d1h   v1.23.15  192.168.0.57                CentOS Linux 7 (Core)  3.10.0-957.el7.x86_64       docker://18.9.0
```
How was the ingress-nginx-controller installed: helm
Current State of the controller: running