
initContainer to build nginx config? #33

Open

Cryptophobia opened this issue Aug 1, 2018 · 5 comments

@Cryptophobia

Should we investigate whether using an initContainer to build the initial nginx config with the Go k8s-client is worth the effort? Or even postStart lifecycle hooks? @krancour @kingdonb?
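Roughly the wiring I have in mind, as a minimal sketch; the image, entrypoint, and paths below are hypothetical, not the chart's actual values:

```yaml
# Sketch only: an initContainer generates the first nginx.conf into a
# shared emptyDir before nginx starts, so slow config generation no
# longer eats into the probes' initialDelaySeconds.
spec:
  initContainers:
  - name: build-config
    image: deis/router:latest            # assumed image, reusing the router's Go binary
    command: ["/build-initial-config"]   # hypothetical entrypoint that queries the k8s API
    volumeMounts:
    - name: nginx-conf
      mountPath: /opt/router/conf        # hypothetical shared config path
  containers:
  - name: deis-router
    image: deis/router:latest
    volumeMounts:
    - name: nginx-conf
      mountPath: /opt/router/conf
  volumes:
  - name: nginx-conf
    emptyDir: {}
```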

@krancour commented Aug 1, 2018

Can you elaborate on what problem that would solve?

@Cryptophobia commented Aug 1, 2018

@krancour : We encountered a problem last week where building the initial config inside the nginx containers was taking longer than the initialDelaySeconds of the livenessProbe and readinessProbe. I added it to the router's README as a scaling concern. But I wonder whether building the config in an initContainer would speed things up.

There are a few scaling concerns when it comes to running many instances of the router. When there are many applications and services deployed, the router needs to query the k8s API for relevant metadata concerning itself and all routable services in order to generate the nginx config. This takes longer as the number of applications/services increases, and the router pods could time out on their health checks before the k8s API client populates the RouterConfig object inside the pod. It may be necessary to increase the initialDelaySeconds timeout of the livenessProbe and readinessProbe health checks on the deis-router deployment object at some point when reaching that scale.
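For example (illustrative values only; the health endpoint path and port are assumptions):

```yaml
# Illustrative only: give the router more startup headroom before the
# first probe fires when there are many routable services to enumerate.
livenessProbe:
  httpGet:
    path: /healthz   # assumed health endpoint
    port: 9090       # assumed health port
  initialDelaySeconds: 60   # raised; tune to observed config-build time
readinessProbe:
  httpGet:
    path: /healthz
    port: 9090
  initialDelaySeconds: 60
```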

Then again, maybe it's not worth investing more time in the router if native ingress is the way forward.

@krancour commented Aug 2, 2018

I agree native ingress is a better choice. That said, if there's an immediate need to improve the router while remaining backward compatible, you might consider turning it into a proper k8s controller. That's not necessarily easy, to be honest, but it offers a distinct improvement: instead of an infinite loop that looks at all services and works out what changed, as the router does today, you're in a loop subscribed to discrete changes. It's waaaay more efficient. Again... not necessarily easy (I don't think) to write a controller, but it's the canonical approach to this problem. You could very well consider forking the existing nginx ingress controller and adapting it to watch annotated services instead of ingresses.
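A minimal sketch of that subscription model using client-go shared informers (modern client-go API shown; the routable annotation name is a placeholder, not necessarily the router's real marker):

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	// Shared informer: the API server pushes discrete add/update/delete
	// events instead of the controller re-listing every service on a timer.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	svcInformer := factory.Core().V1().Services().Informer()

	svcInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			svc := obj.(*corev1.Service)
			// Placeholder annotation; check whatever marks a service routable.
			if svc.Annotations["router.deis.io/routable"] == "true" {
				fmt.Printf("service added: %s/%s -> regenerate nginx config\n", svc.Namespace, svc.Name)
			}
		},
		UpdateFunc: func(_, newObj interface{}) {
			svc := newObj.(*corev1.Service)
			fmt.Printf("service updated: %s/%s -> regenerate nginx config\n", svc.Namespace, svc.Name)
		},
		DeleteFunc: func(obj interface{}) {
			// Regenerate on deletes too, so stale upstreams are dropped.
			fmt.Println("service deleted -> regenerate nginx config")
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; real code would handle signals and a work queue
}
```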

@felixbuenemann commented

I think the main issue here would be that the livenessProbe might be set too low: the readinessProbe already prevents traffic from reaching the router during startup, while the livenessProbe should only detect whether it is hung and needs to be restarted.
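In other words, something along these lines (illustrative values; path and port are assumptions): keep readiness tight, since it only gates traffic, and keep liveness generous so a slow config build isn't mistaken for a hang:

```yaml
readinessProbe:            # gates traffic only; can stay aggressive
  httpGet:
    path: /healthz         # assumed endpoint
    port: 9090             # assumed port
  initialDelaySeconds: 5
  periodSeconds: 5
livenessProbe:             # restarts the pod; keep generous to avoid
  httpGet:                 # killing a router that is still building config
    path: /healthz
    port: 9090
  initialDelaySeconds: 120
  failureThreshold: 3
```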

@Cryptophobia commented

It would be good if we could test this under high load and see whether the problem goes away when the livenessProbe is set higher.
