Unexpected startupProbe behavior. #89995
Comments
/sig cluster-lifecycle
/remove-sig cluster-lifecycle
Try asking in #kubernetes-users on k8s Slack.
@neolit123, no one is responding in that channel. Who is involved in the development of the startup probe?
I think this topic is a better fit for SIG Node. Try asking in their channel on Slack.
@neolit123: Those labels are not set on the issue: In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
I think this is a feature request, not support, so I changed labels. On its face it SEEMS reasonable, but I am not close enough to the details to say. This is definitely sig-node.
Adding a new pod lifecycle state has been determined to be intractable in the past. Is the proposed change to keep the pod in !Ready until the first success from the startupProbe?
Where do you usually discuss this? Just wondering.
Yes, I want the pod to be considered not ready until the startupProbe succeeds.
@riking how long should we wait?
Looking at this issue, I can confirm it, at least superficially - a pod with a perma-failing startupProbe and no readinessProbe is still reported Ready. The question I have is whether we think that is "correct" or not. Initially I thought it was correct, but the more I think about it, the less convinced I am. If I have defined a startupProbe and it is not passing, my app should not be ready. Does anyone disagree?
Also, the app should not be ready while it is terminating.
Whether a pod is ready while terminating is an entirely different question, IMO. Apps that serve network traffic probably need to "drain", and to do that they need to stay "ready" for a while as upstream LBs de-program. Let's not conflate issues, please.
The fix seems simple - let's see what CI says.
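For reference, a minimal sketch of the situation described above, assuming a placeholder image and a probe path that never exists - a perma-failing startupProbe with no readinessProbe defined:

```yaml
# Hypothetical repro: the startupProbe targets a path that never succeeds,
# and no readinessProbe is defined.
apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-repro
spec:
  containers:
  - name: app
    image: nginx                   # placeholder image
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        path: /does-not-exist      # this probe can never succeed
        port: 80
      failureThreshold: 30
      periodSeconds: 5
```

On the affected releases, `kubectl get pod startup-probe-repro` reports the container as Ready even while the startup probe keeps failing, which is the behavior being questioned here.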
Sorry, I don't spend much time on slack... you could have checked the release notes in 1.18 ;-)
/reopen
@matthyx: Reopened this issue. In response to this:
All merged.
@matthyx: Closing this issue. In response to this:
If I have a startupProbe and a readinessProbe, then all is good: the previous version of the pod is terminated only after the startup probe succeeds. But if a readinessProbe isn't specified, the pod becomes ready immediately. It would be nice to add a new state (for example, "starting") that pods would stay in until the startupProbe succeeds. It would also mean that the container is not ready for connections. Thank you for your attention!
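A sketch of the configuration being contrasted here, with illustrative names and endpoints - when both probes are defined, the container only becomes Ready after the startupProbe has succeeded and the readinessProbe passes:

```yaml
# Illustrative spec: the startupProbe gates startup, the readinessProbe gates traffic.
apiVersion: v1
kind: Pod
metadata:
  name: with-readiness
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    startupProbe:
      httpGet:
        path: /healthz     # illustrative endpoint
        port: 80
      failureThreshold: 30
      periodSeconds: 5
    readinessProbe:
      httpGet:
        path: /healthz
        port: 80
      periodSeconds: 10
```

Removing the readinessProbe stanza from this spec is what triggers the behavior described above: the container is reported Ready as soon as it starts, before the startupProbe has ever succeeded.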