Using the NGINX Helm chart with both public and private load balancers enables access to a private Ingress via a public IP #6071
Comments
What do you mean? When you enable the additional internal load balancer, you are using the same ingress-nginx deployment, which by definition uses/shares/exposes the same Ingress definitions. There is no "private" deployment.
@aledbf Thanks for your reply. When I say private deployment, I mean a custom deployment that needs to be private and accessible via the private load balancer service. I know that the same ingress-nginx deployment is used in the background for both the public and private services, but there should be a way to disallow public access to private components.
OK, for that you cannot use the additional load balancer, because that "only" creates an additional internal load balancer. Instead, you need to add an annotation to the controller service. Edit: please check the description of the PR that added the feature, #5717.
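For reference, a minimal values sketch of what enabling that feature looks like in the chart. The AWS annotation and its value are assumptions based on the PR; other cloud providers use their own internal-LB annotations:

```yaml
controller:
  service:
    internal:
      # Creates the additional *internal* LoadBalancer Service
      # alongside the default public one.
      enabled: true
      annotations:
        # AWS-specific annotation that makes the LB internal;
        # GCP and Azure have equivalent provider-specific annotations.
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```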
Thanks @aledbf. I have been following the description of the PR you mentioned and I am adding the annotation. As I mentioned above, everything works as expected when I access my application via DNS, because in Route53 I specify the private load balancer. The problem arises when you dig the public load balancer, get back the IPs of the public load balancer, and try to access those IPs directly. Based on whatever it has as its default target, the NGINX controller may hit the private deployment. This defeats the whole point of having a private service.
I don't understand what you are trying to do or achieve. What does that mean for you? Can you provide an example of what you want to do exactly?
Yep, let me explain in more detail. We have a cluster with multiple applications deployed, some of which need to be publicly accessible and some privately accessible. We are using the Helm chart for ingress-nginx with the additional-internal-load-balancer feature, which therefore creates two LoadBalancer services: nginx-ingress-nginx-controller (public) and nginx-ingress-nginx-controller-internal (private).
For the publicly accessible applications we use a public R53 hosted zone, create a record, and assign the load balancer of nginx-ingress-nginx-controller. For the privately accessible applications we use a private R53 hosted zone and assign the load balancer of nginx-ingress-nginx-controller-internal. However, if I do a dig of the public load balancer, for example…
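To make the failure mode concrete, here is a sketch of the probe being described; the LB hostname, IP address, and private hostname below are placeholders:

```shell
# Resolve the *public* load balancer's IP addresses.
dig +short my-public-lb-1234.eu-west-1.elb.amazonaws.com

# Send a request to one of the returned public IPs with the Host
# header of an application that should only be reachable privately.
curl -H "Host: private-app.internal.example.com" http://203.0.113.10/

# Because both Services front the same controller, which routes by
# Host header, this request can reach the "private" application.
```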
Yes, thank you for the explanation. You could use whitelist-source-range in the Ingress of the private application, using the VPC network address, to restrict access. Edit: you could also use a different ingress-nginx deployment, with an internal LB, only for the private applications.
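A minimal sketch of the first suggestion, assuming a VPC CIDR of 10.0.0.0/16 and hypothetical app/host names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-app
  annotations:
    kubernetes.io/ingress.class: nginx
    # Requests whose source address is outside the VPC range are
    # rejected with 403, even if they arrive via the public LB's IPs.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16"
spec:
  rules:
    - host: private-app.internal.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-app
                port:
                  number: 80
```

Note that this only helps if the controller sees the real client IP, e.g. with externalTrafficPolicy: Local on the Service or PROXY protocol enabled on the load balancer.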
So if this is the case, this feature https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#additional-internal-load-balancer should be revised.
Can you be more specific? Edit: I am asking about what you think is wrong or not clear enough, and what you expect.
I found another user with a similar use case: giantswarm/ingress-nginx-app#90 (comment)
In my understanding, because it doesn't make much sense, security-wise, for a private service to be accessible through a public IP, the documentation should be updated in a way that makes clear that if you don't want your private services to be accessed publicly (which should be the default behavior IMHO), you must declare the…
Closing. This issue is reporting things in the wrong way. Enabling the additional internal load balancer only creates a load balancer. If this is not clear, please open a PR to clarify that.
This is not correct. Using a cloud load balancer to expose ingress-nginx has implications for IP addresses and ports. Any additional configuration can be done using a ConfigMap or annotation. ingress-nginx per se is not aware of the use case and the restrictions that should be applied. Ping @lgg42 (creator of the PR)
@aledbf Thanks for the clarification.
100% agree with you on this comment, which explains it in detail.
@stylianosrigas could you please elaborate on how you achieved mapping the app Ingress object to the additional internal load balancer service endpoint?
@artemkozlenkov You have to enable the following Helm chart values: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-service-internal.yaml#L1
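As a hedged example, enabling it at install time might look like this. The release name nginx matches the nginx-ingress-nginx-controller service names mentioned above; the internal-LB annotations would go in a values file as sketched earlier in the thread:

```shell
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm upgrade --install nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.internal.enabled=true
```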
NGINX Ingress controller version:
Kubernetes version (use kubectl version):
Environment:
What happened:
When enabling the additional internal load balancer feature, both private and public load balancers are created as expected. Everything works normally when accessing the public service via the public Route53 DNS and the private service via the private Route53 DNS. But if we dig the public load balancer and get the LB IPs, some of these IPs route to the private service of a private deployment that is using the NGINX controller.
What you expected to happen:
Accessing the public IPs directly should not route to the private service.
How to reproduce it:
/kind bug