
Using Nginx Helm chart with both public and private load balancer enables access to a private ingress via public ip. #6071

Closed
stylianosrigas opened this issue Aug 25, 2020 · 18 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.

Comments

@stylianosrigas
Contributor

NGINX Ingress controller version:

NGINX Ingress controller
  Release:       v0.34.1

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-18T02:59:13Z", GoVersion:"go1.14.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.6", GitCommit:"d32e40e20d167e103faf894261614c5b45c44198", GitTreeState:"clean", BuildDate:"2020-05-20T13:08:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: AWS
  • Install tools: helm

What happened:

When enabling the additional internal load balancer feature, both private and public load balancers are created as expected. Everything works normally when accessing the public service via the public Route53 DNS and the private service via the private Route53 DNS. However, if we dig the public load balancer and access the returned IPs directly, some of them route to the private service of a private deployment that uses the NGINX controller.

What you expected to happen:

Trying to access the public IPs should not redirect to the private service.

How to reproduce it:

  • Deploy NGINX controller using Helm chart and additional internal load balancer feature.
  • Create a deployment that is accessible via the private load balancer service
  • Dig the public load balancer and get the assigned IPs.
  • Try to access the private deployment via these public IPs.
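The reproduction steps above can be sketched as follows. The load balancer hostname, domains, and IP are placeholders, not values from an actual setup:

```shell
# Resolve the public load balancer's IPs (hostname is a placeholder).
dig +short 12345678-12345678.elb.us-east-1.amazonaws.com

# The intended access paths work as expected:
curl https://public.example.com/    # via the public LB
curl https://private.example.com/   # via the internal LB, from inside the VPC

# The problem: a request sent straight to a public LB IP that carries the
# private Host header can still reach the privately intended application,
# because the same controller serves all ingress definitions.
curl -H "Host: private.example.com" http://203.0.113.10/
```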

/kind bug

@stylianosrigas stylianosrigas added the kind/bug Categorizes issue or PR as related to a bug. label Aug 25, 2020
@aledbf
Member

aledbf commented Aug 25, 2020

But if we do a dig of the public load balancer and get the LB IPs some of these IPs redirect to the private service of a private deployment that is using the Nginx controller.

What do you mean? When you enable the additional internal load balancer, you are using the same ingress-nginx deployment, which by definition uses/shares/exposes the same ingress definitions. There is no "private" deployment.

@cpanato
Member

cpanato commented Aug 25, 2020

@stylianosrigas
Contributor Author

@aledbf Thanks for your reply. By "private deployment" I mean a custom deployment that needs to be private and accessible only via the private load balancer service. I know the same ingress-nginx deployment is used in the background for both public and private services, but there should be a way to prevent public access to private components.

@aledbf
Member

aledbf commented Aug 25, 2020

when I say private deployment I mean a custom deployment that needs to be private and accessible via the private load balancer service.

ok, for that you cannot use the additional load balancer because that "only" creates an additional internal load balancer.

Instead, you need to add the annotation service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0 to the controller service so the load balancer is not public.
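For illustration, the same annotation expressed as a Helm values fragment for the chart's controller service (an untested sketch; the legacy `0.0.0.0/0` value marks the AWS load balancer as internal, and newer cloud-provider versions also accept `"true"`):

```yaml
controller:
  service:
    annotations:
      # Makes the single controller load balancer internal (AWS).
      service.beta.kubernetes.io/aws-load-balancer-internal: "0.0.0.0/0"
```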

Edit: please check the description of the PR that added the feature #5717

@stylianosrigas
Contributor Author

Thanks @aledbf. I have been following the description of the PR you mentioned and I am adding the

service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0

annotation. As I mentioned above, everything works as expected when I access my application via DNS, because Route53 points it at the private load balancer. The problem arises when you dig the public load balancer, get back its IPs, and access them directly: the NGINX controller, depending on its default target, may hit the private deployment. This defeats the whole point of having a private service.

@aledbf
Member

aledbf commented Aug 25, 2020

Therefore, this makes useless the whole point of having a private service.

I don't understand what you are trying to do or achieve.

private deployment

What does that mean for you?

Can you provide an example of what you want to do exactly?

@stylianosrigas
Contributor Author

Yep, let me explain in more detail.

We have a cluster with multiple applications deployed, some of which need to be publicly accessible and some privately accessible. We are using the Helm chart for ingress-nginx with the additional-internal-load-balancer feature, which creates two LoadBalancer services:

nginx-ingress-nginx-controller-internal
nginx-ingress-nginx-controller

For the publicly accessible applications we use a public Route53 hosted zone, create a record, and point it at the load balancer of nginx-ingress-nginx-controller. For the privately accessible applications we use a private Route53 hosted zone and point it at the load balancer of nginx-ingress-nginx-controller-internal.
Therefore, when we access public.example.com and private.example.com, both work and use the correct load balancer services.

However, if I dig the public load balancer, for example dig 12345678-12345678.elb.us-east-1.amazonaws.com, and then access the answer IPs directly, I can potentially get back the private application. That means someone with these IPs would be able to access the private application, which is a security concern. Does this make sense?

@aledbf
Member

aledbf commented Aug 25, 2020

Does this make sense?

Yes, thank you for the explanation.

You could use whitelist-source-range in the ingress of the private application using the VPC network address to restrict access.

Edit: you could also use a different ingress-nginx deployment only for the private applications using an internal LB.
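A minimal sketch of the first suggestion, using the current networking.k8s.io/v1 Ingress shape; the app name, host, and the 10.0.0.0/16 VPC CIDR are placeholders. With this annotation, ingress-nginx rejects clients outside the listed ranges with a 403:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: private-app            # hypothetical name
  annotations:
    # Only allow clients from the VPC network range.
    nginx.ingress.kubernetes.io/whitelist-source-range: "10.0.0.0/16"
spec:
  rules:
    - host: private.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: private-app
                port:
                  number: 80
```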

@cpanato
Member

cpanato commented Aug 26, 2020

so if this is the case this feature https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#additional-internal-load-balancer should be revised

@aledbf
Member

aledbf commented Aug 26, 2020

so if this is the case this feature https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#additional-internal-load-balancer should be revised

can you be more specific?

Edit: I am asking what you think is wrong or unclear, and what you expect.

@aledbf
Member

aledbf commented Aug 26, 2020

I found another user with a similar use case giantswarm/ingress-nginx-app#90 (comment)

@stafot
Contributor

stafot commented Sep 1, 2020

In my understanding, because it doesn't make much sense security-wise for a private service to be reachable through a public IP, the documentation should be updated to make clear that if you don't want your private services to be publicly accessible (which should be the default behavior, IMHO), you must declare the whitelist-source-range annotation.
Apart from this, and correct me if my understanding is incomplete or missing some use cases: since ingress-nginx operates as an L7 proxy, shouldn't it return a 404 by default when the host doesn't match the one used, so that requests via bare IP are not possible at all? I think this is the default behavior of the ALB ingress.

@aledbf
Member

aledbf commented Sep 1, 2020

Using Nginx Helm chart with both public and private load balancer enables access to a private ingress via public ip.

Closing. This issue is being used to report things in the wrong way.

enabling the additional internal load balancer only creates a load balancer. If this is not clear, please open a PR to clarify that.

In my understanding, because doesn't make much sense to have a private service be able to be accessible through a public IP security-wise, the documentation should be updated in way that makes clear that if you don't want your private services to be accessed publicly

This is not correct. Using a cloud load balancer to expose ingress-nginx has implications for IP addresses and ports. Any additional configuration can be done using a configmap or annotation. ingress-nginx per se is not aware of the use case and the restrictions that should be applied.

ping @lgg42 (creator of the PR)

@aledbf aledbf closed this as completed Sep 1, 2020
@aledbf
Member

aledbf commented Sep 1, 2020

@stafot please check the description of the PR that added the feature #5717

@stafot
Contributor

stafot commented Sep 1, 2020

@aledbf Thanks for the clarification

@lgg42
Contributor

lgg42 commented Oct 24, 2020

Using Nginx Helm chart with both public and private load balancer enables access to a private ingress via public ip.

Closing. This issue is being used to report things in the wrong way.

enabling the additional internal load balancer only creates a load balancer. If this is not clear, please open a PR to clarify that.

In my understanding, because doesn't make much sense to have a private service be able to be accessible through a public IP security-wise, the documentation should be updated in way that makes clear that if you don't want your private services to be accessed publicly

This is not correct. Using a cloud load balancer to expose ingress-nginx has implications for IP addresses and ports. Any additional configuration can be done using a configmap or annotation. ingress-nginx per se is not aware of the use case and the restrictions that should be applied.

ping @lgg42 (creator of the PR)

100% agree with your comment; it explains this in detail.

@artemkozlenkov

@stylianosrigas could you please elaborate on how you mapped the app's ingress object to the additional internal load balancer service endpoint?

@m00lecule
Contributor

@artemkozlenkov You have to enable the following Helm chart value: https://github.com/kubernetes/ingress-nginx/blob/main/charts/ingress-nginx/templates/controller-service-internal.yaml#L1

helm upgrade ingress-nginx ingress-nginx/ingress-nginx -n nginx \
  --reuse-values \
  --set controller.service.internal.enabled=true \
  --set-string controller.service.internal.annotations."service\.beta\.kubernetes\.io/aws-load-balancer-internal"=true
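Equivalently, as a values fragment (an untested sketch; the annotation value is quoted because Kubernetes annotations must be strings):

```yaml
controller:
  service:
    internal:
      # Creates the additional internal LoadBalancer service.
      enabled: true
      annotations:
        service.beta.kubernetes.io/aws-load-balancer-internal: "true"
```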
