Track pod state from deployment run #15408

Open
jpedrick-numeus opened this issue Sep 17, 2024 · 3 comments
Labels
enhancement An improvement of an existing feature

Comments

@jpedrick-numeus

Describe the current behavior

As a cost-overrun prevention measure, our Kubernetes work pool's base job template has active_deadline_seconds set. If the pod is killed out from under the job, the flow run stays in the Running state forever and keeps taking up a slot in the work queue.

Describe the proposed behavior

I think it would make sense to track pod state for Kubernetes work pools, so that jobs know either to start a new pod and re-run, or to report 'Failed' with a reason.

Likewise, it would be good if some job metadata (such as the pod name, resource requests, etc.) were visible from the Prefect UI.

Example Use

Users can add a simple layer of protection against cost overruns by setting active_deadline_seconds in the work pool's base job template.
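
For illustration, here is a rough sketch (not an existing Prefect feature) of how a worker-side check could surface the Job's terminal reason using the official kubernetes Python client; the namespace and job name are placeholders:

```python
# Sketch only: detect that active_deadline_seconds killed the job, using the
# official kubernetes Python client. Namespace and job name are placeholders.
from kubernetes import client, config

def job_failure_reason(namespace: str, job_name: str) -> str | None:
    """Return e.g. 'DeadlineExceeded' if the Job has failed, else None."""
    config.load_incluster_config()  # use config.load_kube_config() outside the cluster
    batch = client.BatchV1Api()
    job = batch.read_namespaced_job(name=job_name, namespace=namespace)
    for cond in job.status.conditions or []:
        if cond.type == "Failed" and cond.status == "True":
            # Kubernetes sets reason='DeadlineExceeded' when the deadline fires.
            return cond.reason
    return None

# if (reason := job_failure_reason("prefect", "my-flow-run-job")):  # placeholders
#     ...mark the flow run Failed/Crashed with this reason instead of leaving it Running...
```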

Additional context

No response

@jpedrick-numeus added the enhancement label on Sep 17, 2024
@desertaxle
Member

Thanks for the enhancement request @jpedrick-numeus! One idea we've had in this area is to have pods send heartbeats back to the Prefect server so that if the heartbeats stop, the server knows the pod went down. In that case, we'd probably mark the flow run as CRASHED since the underlying infrastructure caused the failure. Does that sound like it would work for your use case?
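
For concreteness, the pod-side piece might look roughly like this sketch (the endpoint, env var names, and interval are placeholders, not an existing Prefect API):

```python
# Rough sketch of a pod-side heartbeat loop. Endpoint, env var names, and
# interval are hypothetical placeholders, not an existing Prefect API.
import os
import threading
import time

import requests

HEARTBEAT_URL = os.environ.get("HEARTBEAT_URL", "http://prefect-server/api/heartbeats")  # placeholder
FLOW_RUN_ID = os.environ.get("FLOW_RUN_ID", "unknown")  # placeholder
INTERVAL_SECONDS = 30  # assumed interval

def _heartbeat_loop() -> None:
    while True:
        try:
            # Tell the server this flow run's pod is still alive.
            requests.post(HEARTBEAT_URL, json={"flow_run_id": FLOW_RUN_ID}, timeout=5)
        except requests.RequestException:
            # A single missed heartbeat is fine; the server should only act
            # after several consecutive misses.
            pass
        time.sleep(INTERVAL_SECONDS)

def start_heartbeats() -> None:
    threading.Thread(target=_heartbeat_loop, daemon=True, name="heartbeat").start()
```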

Also, where would you expect to see Kubernetes information for a flow run in the Prefect UI?

@jpedrick-numeus
Author

@desertaxle that would work for me. In my case I only need the pod state to be tracked so that Prefect knows to move on to the next job in the work queue.

I think the details tab under https:///flow-runs/flow-run/?tab=Details would be perfect.

@jameswu1991

jameswu1991 commented Nov 8, 2024

Just to help tie dispersed communication together, I believe these threads are about the same issue:

It seems that using heartbeats to detect crashed pods is an old idea, but unfortunately it left unhappy memories back in v1. Yet, according to the replies in #7239, many members of the community (myself included) would love to see it come back (as an optional, non-default feature, of course).

The release of Prefect v3 brought back an implementation of heartbeats as a status on the worker (for work pools only). I would speculate that it's not too hard to write a loop service that periodically checks for crashed workers and also places their flow runs into a Crashed state, but the specifics are beyond my expertise.
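
A minimal sketch of that check, over made-up data shapes (worker id → last heartbeat, worker id → running flow-run ids); none of this is an existing Prefect service:

```python
# Hypothetical sketch of the check a periodic "reaper" loop could run.
from datetime import datetime, timedelta, timezone

HEARTBEAT_TIMEOUT = timedelta(seconds=90)  # assumed threshold

def find_runs_to_crash(
    last_heartbeats: dict[str, datetime],  # worker_id -> last heartbeat time
    runs_by_worker: dict[str, list[str]],  # worker_id -> running flow-run ids
    now: datetime,
) -> list[str]:
    """Return the flow-run ids whose worker has stopped heartbeating."""
    crashed: list[str] = []
    for worker_id, last_seen in last_heartbeats.items():
        if now - last_seen > HEARTBEAT_TIMEOUT:
            crashed.extend(runs_by_worker.get(worker_id, []))
    return crashed

# Example: "w2" last heartbeated 5 minutes ago, so its run should be crashed.
now = datetime.now(timezone.utc)
heartbeats = {"w1": now, "w2": now - timedelta(minutes=5)}
runs = {"w1": ["run-a"], "w2": ["run-b"]}
assert find_runs_to_crash(heartbeats, runs, now) == ["run-b"]
```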

The ability to distinguish a healthy worker from a worker that crashed but came back before the heartbeat threshold was reached may also not be easy to implement. Perhaps one could use a unique worker id (the auto-generated unique k8s pod name, for example) as an incarnation distinguisher.
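
To illustrate the incarnation idea (the names and fields below are invented, not a Prefect API):

```python
# Hypothetical sketch: a run remembers which incarnation of the worker claimed
# it (e.g. the k8s pod name), so a worker that crashed and came back as a new
# pod within the heartbeat window is still detected.
from dataclasses import dataclass

@dataclass
class RunClaim:
    flow_run_id: str
    worker_incarnation: str  # e.g. the auto-generated k8s pod name

def is_orphaned(claim: RunClaim, current_incarnation: str) -> bool:
    """The run is orphaned if its claiming incarnation is no longer the live one."""
    return claim.worker_incarnation != current_incarnation

# The worker restarted as a new pod, so the old claim should be re-handled.
claim = RunClaim("run-b", worker_incarnation="worker-abc12")
assert is_orphaned(claim, current_incarnation="worker-xyz89")
```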

The other way to (partially) mitigate this issue would be to propagate SIGINT from the parent process to the child process, giving the child process some time to react and shut down gracefully, or at least report back to the Prefect API that it has crashed. This behavior was first noticed a while ago but seems not to have been fixed.
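
A stdlib-only sketch of that mitigation, with a placeholder child command:

```python
# Sketch of forwarding SIGINT/SIGTERM from a parent process to its child so
# the child gets a chance to shut down gracefully (or report its state)
# before the pod is torn down. The child command is a placeholder.
import signal
import subprocess
import sys

def main() -> int:
    child = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(3600)"])

    def forward(signum, _frame):
        # Pass the signal on to the child instead of only the parent seeing it.
        child.send_signal(signum)

    signal.signal(signal.SIGINT, forward)
    signal.signal(signal.SIGTERM, forward)
    return child.wait()

if __name__ == "__main__":
    sys.exit(main())
```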
