Zero downtime upgrade #6928
I was actually wondering about the same thing, but when looking at the command which gets executed ("wait-shutdown") and at the corresponding code, I noticed that a SIGTERM signal still gets sent. The linked blog post says that a SIGQUIT signal needs to be used for a graceful shutdown. I wonder if changing the signal sent in the wait-shutdown command would already lead to zero-downtime deployments? |
@thirdeyenick the SIGTERM is for the nginx controller process, not nginx itself. The nginx controller issues SIGQUIT to the nginx process. |
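For context, the chart wires this shutdown sequence through a preStop lifecycle hook. A minimal sketch of the relevant pod spec fragment (illustrative, not verbatim from the chart; check your chart version for the exact defaults):

```yaml
# Illustrative fragment of the controller pod spec.
# On pod termination, kubelet runs the preStop hook first, then sends SIGTERM.
containers:
  - name: controller
    lifecycle:
      preStop:
        exec:
          # Asks the controller binary to begin shutdown; the controller
          # in turn sends SIGQUIT to the nginx master for a graceful drain.
          command:
            - /wait-shutdown
```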
Thank you very much for the information. This means that zero downtime deployments are already implemented, right? |
I'd like to see a confirmation from the team, but according to the linked sources it should already be implemented, I think... Maybe a pull request to remove it from the readme doc is in order, if that's the case. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
Does anyone know the answers to @Baklap4's questions? It would be great to update the readme if this is the case. This is especially useful for everyone deciding on running this controller in production environments. |
@longwuyuan I'm not sure this answers the question. According to the readme there is a possibility for disruption when doing an upgrade. However this might not be the case and the doc just needs to be updated. |
There may be other factors at play. Say for example, you are exposing ingress-nginx out of the cluster via svc.type=loadbalancer. If also using metallb, the migration of the vip can cause connection issues. |
There are fewer eyes here than on slack. So I suggest close this issue and come discuss this at kubernetes.slack.com in the ingress-nginx-users channel. There are seasoned developers and users there. This is a vast complicated topic that can not be easily triaged on github. |
Why would this be closed? The question is if the current documentation is up to date or not, because it looks like the documentation on zero downtime upgrades is out of date and that the required changes have since been incorporated into the chart. |
I think I listed all the reasons I could think of in the earlier comment, but apologies if that does not seem like a great idea; tracking this in one place would help. This is not a trivial matter, and if one can fathom the intricacies, then an educated guess is also not out of reach. To begin with, a change in ingress object configuration can result in a need to reload the nginx.conf of the controller pod. Some other, more elaborate text is at the link I posted earlier. Keep this issue open if that seems to work well. Just note that there are many more developers and engineers on Slack, so the likelihood of getting comments on this is higher there. |
If the current docs are not sufficient then perhaps they should be amended with the suggestion to carefully consider other facets of upgrades and to consult with experienced engineers on slack. |
@longwuyuan I have posted on slack, and happy to discuss there. However the readme refers to https://medium.com/codecademy-engineering/kubernetes-nginx-and-zero-downtime-in-production-2c910c6a5ed8 . All the steps already seem to be incorporated into the main chart. There might be other factors in play, but at least that problem is solved. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
/assign @theunrealgeek |
@strongjz: GitHub didn't allow me to assign the following users: theunrealgeek. Note that only kubernetes members, repo collaborators and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
/triage accepted |
Some information sharing: by default, when the controller receives SIGTERM, it directly issues a SIGQUIT to the nginx process without waiting (https://github.com/kubernetes/ingress-nginx/blob/main/internal/ingress/controller/nginx.go#L373). I set --shutdown-grace-period to 15 and the downtime seems to be reduced. |
Yes @heylongdacoder, and I think this value should default to at least 5s to mitigate common cases. The documentation should indicate that if an external load balancer connects directly to the pods, the value should be increased to allow external draining before terminating nginx. |
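In the Helm chart this flag can be passed through `controller.extraArgs`. A sketch, assuming a recent chart version (the 15s value echoes the comment above and is not a recommendation):

```yaml
# values.yaml fragment (assumption: the chart's controller.extraArgs renders
# each key as a --key=value flag on the controller container, so this becomes
# --shutdown-grace-period=15)
controller:
  extraArgs:
    shutdown-grace-period: "15"
```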
On an AWS load balancer, this must be 120s according to their support. We have not had issues since we set this value. |
/help |
@iamNoah1: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to its standard rules. Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale |
/remove-lifecycle stale |
This issue is labeled with triage/accepted. For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/ /remove-triage accepted |
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
Doesn't look like this happens though, with this |
If someone is still looking for a simple solution to achieve upgrades without any downtime, these parameters are working perfectly for us:
Also, if you're using an injected linkerd proxy, you'll need:
Without these set there was always some minor downtime during upgrades. |
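The concrete parameters did not survive in this thread, so the following is only a commonly cited combination for the Helm chart, not necessarily what the commenter used. All names and values here are assumptions to validate against your chart version:

```yaml
# Hypothetical values.yaml sketch for reducing upgrade disruption.
controller:
  # Keep the pod around long enough for external load balancers to
  # deregister it (AWS support reportedly suggests 120s, per a comment above).
  terminationGracePeriodSeconds: 300
  # Delay between SIGTERM to the controller and SIGQUIT to nginx.
  extraArgs:
    shutdown-grace-period: "15"
  # Graceful-shutdown hook shipped with the controller image.
  lifecycle:
    preStop:
      exec:
        command:
          - /wait-shutdown
  # Bring the replacement pod up before taking the old one down.
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0
      maxSurge: 1
```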
@mosoka it worked!!! I spent hours trying to find a solution to this annoying downtime issue, and yours is the only thing that worked for me. Thank you so much, you made my day 😄 |
Docs (https://github.com/kubernetes/ingress-nginx/tree/master/charts/ingress-nginx#upgrading-with-zero-downtime-in-production) mention that by default, during an upgrade, there is a service interruption caused by pods being restarted, and another article is linked explaining how one can fix that.
But when looking at values.yaml, I can see both terminationGracePeriodSeconds and the preStop hook are set correctly in order to drain connections (ingress-nginx/charts/ingress-nginx/values.yaml, line 237 and lines 583 to 587 at commit a7fb791).
Is the documentation a little bit outdated, or is there something else that I should set to get near-zero-downtime upgrades?