Octops causes ingress controller to constantly reload which causes dropped websocket connections #21
Comments
Some example logs from our ingress controller while playing on our staging cluster with the Octops controller enabled. You can see the ingress controller reloads every time a new Ingress resource is created:
Thanks @jordo, I will investigate that too.
Quick update: the restarts are not caused by the Octops controller. This is behaviour of the NGINX Ingress Controller itself, which reloads the process whenever an Ingress resource is created, updated, or deleted. See the details at https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required. The same problem is not present when using another ingress controller such as HAProxy or Contour. I will keep this issue open while investigating the mentioned alternatives to the NGINX Ingress Controller. My goal is to write a detailed how-to for using another controller together with the Octops Controller.
We have moved all of our ingress traffic to the HAProxy ingress controller (as an alternative to the open-source NGINX ingress controller), and can confirm we no longer see socket disconnections on any active connection routed through the same reverse proxy (now HAProxy).
As an example, we were able to achieve feature parity with what NGINX provided for our use case via the Octops prefixes below:
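The original snippet was not captured here. As a rough illustration only (annotation names below are assumptions, not taken from this thread): the Octops controller copies `octops-`-prefixed annotations from the Fleet template onto the Ingress it generates, which is how HAProxy-specific settings can be passed through:

```yaml
# Hypothetical Fleet excerpt; names and values are illustrative.
# Annotations prefixed "octops-" are copied (prefix stripped) onto
# the Ingress the Octops controller creates for each GameServer.
apiVersion: agones.dev/v1
kind: Fleet
metadata:
  name: example-fleet          # assumed name
spec:
  template:
    metadata:
      annotations:
        octops.io/gameserver-ingress-mode: domain
        octops.io/gameserver-ingress-domain: example.com
        # Passed through to the generated Ingress for HAProxy:
        octops-kubernetes.io/ingress.class: haproxy
        octops-haproxy.org/timeout-tunnel: "3600s"   # websocket idle timeout (assumed)
```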
The controller we are using is haproxytech/kubernetes-ingress.
Closing this issue following the update that adds HAProxy support.
Creation of Ingress objects causes a reload of the nginx controller, which ultimately shuts down all nginx worker processes (https://kubernetes.github.io/ingress-nginx/how-it-works/#when-a-reload-is-required). All existing websocket connections serviced by that controller will eventually be disconnected after worker_shutdown_timeout expires: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#worker-shutdown-timeout
We discovered this after consistently seeing our existing websocket connections, proxied through the same ingress controller as Octops, disconnect at approximately the same time, at roughly 4-minute intervals (240s is the default worker-shutdown-timeout). This is also an issue for HTTP connections that keep a socket open via keep-alive.
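For reference, the shutdown grace period can be tuned (though not eliminated) via the ingress-nginx ConfigMap; a sketch, assuming the ConfigMap name and namespace of a default install:

```yaml
# ingress-nginx ConfigMap (name/namespace assume a default install).
# worker-shutdown-timeout controls how long old workers keep serving
# existing connections after a reload before they are killed.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  worker-shutdown-timeout: "600s"   # default is 240s; raising it only delays the drop
```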
This is documented in a few places:
kubernetes/ingress-nginx#6731
kubernetes/ingress-nginx#7115
And a good summary write-up:
https://danielfm.me/post/painless-nginx-ingress/#ingress-classes-to-the-rescue
But ultimately a reload of the configuration will eventually cause socket connections to drop. In the first link above, the nginx developers expect that "the solution to the problem is the client library handles the reconnect", which is obviously a problem when dealing with real-time games all running via websocket through the same ingress controller. It should be noted that nginx+ (the enterprise paid product) does not have this limitation:
https://www.nginx.com/faq/how-does-zero-downtime-configuration-testingreload-in-nginx-plus-work/
https://www.nginx.com/blog/using-nginx-plus-to-reduce-the-frequency-of-configuration-reloads/
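For completeness, the client-side mitigation the nginx developers point to amounts to reconnecting with backoff whenever the proxy drops the socket. A minimal sketch of the backoff schedule such a client might use (function and parameter names are illustrative, not from this thread):

```python
import random

def backoff_delays(base=1.0, cap=30.0, attempts=6):
    """Exponential backoff with full jitter for websocket reconnects.

    Yields one delay (in seconds) per reconnect attempt; the caller
    sleeps for the delay, retries the connection, and restarts the
    schedule once a connection succeeds.
    """
    for attempt in range(attempts):
        ceiling = min(cap, base * (2 ** attempt))
        yield random.uniform(0.0, ceiling)

# A client cut off by the 240s worker-shutdown-timeout would retry
# quickly at first, then back off toward the 30s cap.
delays = list(backoff_delays())
```

Jitter matters here: when a reload drops every connection at once, thousands of clients reconnecting on an identical schedule would hammer the controller in synchronized waves.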