ingress-nginx not capturing origin IP with ExternalName service types #11753
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/remove-kind bug
Just to state the obvious: nothing stops anyone from using a Service of type `ExternalName` in the way you describe. But AFAIK that is not the typical use case for which that Service type was intended. It works for the bouncing you need, but it is not built with components similar to a router or a firewall. There are hardly any docs explaining the intricate Layer 4 and Layer 7 behavior in the use case you described when it comes to retaining real-client information across the hops involved.
Can you please check whether the header is being passed to your external service? Normally you tell your service from which IP ranges to accept such information, as otherwise it could easily be spoofed by anyone. In NGINX, for example, this is handled via the Real IP module (https://nginx.org/en/docs/http/ngx_http_realip_module.html), where you define which IP addresses to accept the real IP from. In your case these IP addresses might not be the same as your pod or the node it is running on, depending on your network setup.
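To illustrate the Real IP module mentioned above, here is a minimal sketch of a server block on the receiving service. The CIDRs and the response body are placeholders, not values from this thread; they would need to match the proxy hops that actually forward traffic in your network.

```nginx
# Hypothetical server block; set_real_ip_from ranges are assumptions
server {
    listen 80;

    # Trust X-Forwarded-For only when it arrives from these ranges
    set_real_ip_from 10.0.0.0/8;      # e.g. cluster/pod network (placeholder)
    set_real_ip_from 192.168.1.0/24;  # e.g. node/gateway range (placeholder)
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;             # skip past trusted proxies in the chain

    location / {
        # $remote_addr now reflects the restored client IP
        return 200 "client: $remote_addr\n";
    }
}
```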
As far as I know, standard Azure load balancers don't support proxy protocol, so I assume this is why.
This is exactly what I'm trying to achieve, and I already enabled real-ip in the configuration. The problem is that nginx does not show the real IP for requests going to my services with type `ExternalName`.
As far as I know, Ingress NGINX does not support PROXY protocol for upstreams; maybe not even NGINX itself does. So this approach cannot be implemented anyway.
This is only client-facing. For upstream-facing, you need to make sure NGINX is handing the information to the upstream. Have you enabled the relevant forwarding settings? EDIT: Sorry, the latter does not apply to your use case.
This is good to know. What's strange to me is that ingress-nginx on my public-facing cluster does get the real IP for incoming requests to "regular" services (the ones fronting pods on that same cluster). However, ingress-nginx does not log the real IP for requests going to the `ExternalName` services:

- Request from internet --> external cluster --> pod on external cluster (this works, nginx logs the real IP)
- Request from internet --> external cluster (`ExternalName` service) --> pod on internal cluster (does not work, nginx logs either the K8s node IP or the pod default gateway 192.168.1.1)

Please note that I'm not referring to any nginx logs on the internal cluster - this is only from the perspective of the external cluster.
Yes, from my original post I added this to the ingress-nginx config on the external and internal clusters:
Oh, wait, NOW I get you. I misunderstood you and thought you were talking about Ingress NGINX not handing the source IP information to your internal cluster. So you mean logging inside the Ingress NGINX on your external cluster differs depending on the target upstream, right? That's strange. I'm working on something different atm that needs to be done asap, but I'd like to take a deeper look here later.
Exactly! I don't even care if the real IP makes it to the backend. I only want to prevent non-whitelisted IPs from making requests to certain ingresses on the external cluster. As a comparison, here is an example log entry for a request going to one of the pods running on that same external cluster. You can see that it logs the real 20.X.X.X origin IP address.
but here is a log entry for a request going to one of the `ExternalName` services:
Thanks for your help. This problem is really baffling me.
This is stale, but we won't close it automatically. Just bear in mind that the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
What happened:
I need to get the real IP of incoming requests, so I have added the following to my ingress-nginx configMap:
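The original ConfigMap snippet was not preserved above, so the following is only a hedged sketch of the kind of settings involved, assuming an Azure load balancer that (as noted later in the thread) does not speak proxy protocol. The resource name, namespace, and CIDR are placeholders, not values from this issue.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace depend on the install
  namespace: ingress-nginx
data:
  # Trust X-Forwarded-For coming from the load balancer in front of the controller
  use-forwarded-headers: "true"
  # Restore the client address via the realip module
  enable-real-ip: "true"
  # CIDR(s) of the trusted proxy tier; placeholder value
  proxy-real-ip-cidr: "10.0.0.0/8"
```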
Requests going to ingresses bound to services with type `ClusterIP` are showing the real public IP address of the origin, as expected. These are requests that go directly to app pod(s) running on that same cluster. Example log:

However, requests going to ingresses bound to services with type `ExternalName` are showing either the K8s node IP or the default gateway of the pod (192.168.1.1). I am using `ExternalName` services to proxy API requests from the internet to apps running on a non-public-facing K8s cluster. Here are some nginx log examples:

The service definition is very basic:
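The service YAML itself did not survive in this copy of the issue; as a hedged sketch, a basic `ExternalName` Service of the kind described (all names are illustrative) looks like:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-api        # hypothetical name
  namespace: default
spec:
  type: ExternalName
  # DNS name of the app on the non-public cluster (placeholder)
  externalName: api.internal.example.com
```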
The ingress definitions used for both the `ClusterIP` and `ExternalName` services are very similar. Here is an example of one that uses the `ExternalName` service:

The `ClusterIP` ingresses have basically the same format, except that they don't specify a path or use the `nginx.ingress.kubernetes.io/upstream-vhost` annotation. I can't identify any other major differences.

What you expected to happen:
Incoming requests to ingresses using the `ExternalName` service log the real origin IP.

NGINX Ingress controller version:
Kubernetes version:
Environment:
Azure AKS (Kubernetes 1.27.9)
Other:
Note that all these requests are working properly and making it to the appropriate backends. However, I need the real IP in order to set up IP whitelisting with the `nginx.ingress.kubernetes.io/whitelist-source-range` annotation.

Any help would be greatly appreciated. Thank you.