Terminate pod on container completion #3582
Comments
The discussion uses the title
Since we have converged on #3759 I will close this.
From @davidhadas in https://kubernetes.slack.com/archives/C0BP8PW9G/p1676531990732709
Re-opening to collect more use cases to understand whether this feature is worth pursuing. Originally the thought was that the scenarios beyond sidecars are limited.
Scenario clarification from @davidhadas: the Pod that needs to be terminated is marked as
Clarifications - copied from slack for history here:
To be precise, the need identified (as quoted above) will be met if we support deletion of the pod (which may be followed by, for example, a ReplicaSet later starting a new pod elsewhere) if a specific container exits - i.e. it calls for some marking on the container to indicate that if this container exits, it should result in the deletion of the pod (for example by supporting restartPolicy=Never, but other markings could also be used). If we implement this feature, it is best that the decision of whether to restart a pod on container exit be per container, so that users can decide which container exits should result in pod termination and which should result in container restart.
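A rough illustration of the per-container marking described above. This is a hypothetical sketch only: no such field exists in the Kubernetes Pod API today, and the field name `terminatePodOnExit` is invented here purely to make the proposal concrete.

```yaml
# Hypothetical sketch - "terminatePodOnExit" is NOT a real Kubernetes field.
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  restartPolicy: Always
  containers:
  - name: worker
    image: busybox
    # Invented marking: if this container exits, delete the whole Pod
    # (a ReplicaSet owner would then start a replacement Pod elsewhere).
    terminatePodOnExit: true
  - name: helper
    image: busybox
    # No marking: this container is restarted in place per restartPolicy.
```

The point of the sketch is that the restart-vs-terminate decision attaches to each container, not to the Pod as a whole.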
Is this the same as #3676? aka "keystone" containers
I think the semantics of re-running Init containers and killing the entire Pod are different. Both have scenarios associated with them. But Terminate Pod is more "destructive" than the "keystone" behavior asked for in #3676. #3676 is more about dependencies between Containers, the way I read it. It could be implemented via Terminate Pod, though.
Ok. Fair. The "depth" of restart is different. Is there any reason to have
both? Why not just say this keystone feature goes all the way down?
Destroys volumes, kills containers, restarts everything from the ground up.
Use case: Kubernetes CronJobs/Jobs where a service mesh is involved. The service mesh sidecar container is designed to run forever, but in the case of a CronJob/Job the sidecar should terminate when the main pod completes its job. istio/istio#11659
@ceastman-r7 this is somewhat different - sidecars (which is alpha in 1.27) covers what you need, I think.
@thockin Do you have a link to the sidecars documentation? |
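For reference, the native sidecar support mentioned above (KEP-753, behind the SidecarContainers feature gate in recent releases) is expressed as a restartable init container. A sketch, with placeholder image names, of how it addresses the Job use case:

```yaml
# Sketch of the native sidecar pattern (KEP-753). Image names are placeholders.
apiVersion: batch/v1
kind: Job
metadata:
  name: job-with-sidecar
spec:
  template:
    spec:
      restartPolicy: Never
      initContainers:
      - name: mesh-proxy
        image: example/proxy:latest
        # restartPolicy: Always on an init container marks it as a sidecar:
        # it keeps running alongside the main containers for the Pod's lifetime.
        restartPolicy: Always
      containers:
      - name: main
        image: example/worker:latest
```

When the `main` container finishes, the kubelet terminates the sidecar and the Job can complete, which is exactly the behavior the service-mesh use case needs.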
Now that graceful shutdown guarantees pods report their terminal status (via changes to how we report pod phase), I think we’ve established more context around what the node is allowed to do to a pod (indicate via shutdown that the pod is terminal regardless of restart policy). While I would be hesitant to allow pods to indicate they can be deleted, a workload already has the power to “give up, take its toys, and go home” by crash looping, which is not always the most effective mechanism. For restart never containers (init or otherwise) giving up makes a lot of sense if the inputs are fixed and there is an expectation of a time window for retry before giving up. For restart always containers, I would suggest exploring questions like:
No matter what, we'd need to improve controller backoff when pods fail - I just reopened and froze two issues that such a feature would make significantly more dangerous (which is why this one caught my eye).
All scenarios I have seen thus far were around Jobs (restart policy Never or OnFailure). For restart policy Always, some of the scenarios you listed with infinite restart backoff can be implemented via Init Containers. How do you see risk vs. value for the TerminatePod behavior for finite Pods?
This KEP has valid scenarios, but nobody to work on it. Likely not for the 1.28 release.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community. /lifecycle rotten
Enhancement Description
Allow configuring a container so that its completion terminates the pod when no restart would be performed per restartPolicy.
/sig node
(k/enhancements) update PR(s):
(k/k) update PR(s):
(k/website) update PR(s):