I've looked at the documentation and saw those handy Hooks, but wouldn't the initial API request still wait until the server returns something? I was thinking of returning something immediately: for the polling method, an ID the user can poll; for the webhook method, a success message, while the background task keeps running. I guess something that could potentially work is returning immediately from the predict function and running a thread in the background, but that seems hacky. Having async tasks is important for my type of workloads.
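For reference, here is a rough sketch of that thread-based workaround. It is not tied to this project's API: `predict`, `run_model`, and the in-memory `JOBS` store are hypothetical names, and a real deployment would want a persistent store (Redis, a database) so results survive restarts.

```python
import threading
import uuid

# Hypothetical in-memory job store; swap for something persistent in practice.
JOBS = {}

def run_model(inputs):
    # Placeholder for the actual long-running inference call.
    ...

def predict(inputs):
    """Return a job id immediately and finish the work in a background thread."""
    job_id = str(uuid.uuid4())
    JOBS[job_id] = {"status": "running", "result": None}

    def worker():
        try:
            JOBS[job_id] = {"status": "done", "result": run_model(inputs)}
        except Exception as exc:
            JOBS[job_id] = {"status": "failed", "error": str(exc)}

    threading.Thread(target=worker, daemon=True).start()
    return {"job_id": job_id}

def poll(job_id):
    """Client calls this later to check whether the result is ready."""
    return JOBS.get(job_id, {"status": "unknown"})
```

It works, but the job bookkeeping, error handling, and cleanup all end up hand-rolled inside `predict`, which is exactly why first-class support would be nicer.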
🚀 Feature
Is there currently a way to run inference and get the result by polling or a webhook callback?
Motivation
If an inference takes a long time to run, let's say 30 minutes, you don't want the connection to drop, because you would have to rerun the inference, especially on a shaky connection.
Pitch
A webhook callback or a polling method would let us run inference as a background task keyed by a UUID (or something the user specifies), so that the user can come back later for the result. It would also help integrations, since the inference would no longer be a blocking request.
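As a minimal sketch of the webhook side of this proposal (again not this project's API; `submit`, `run_model`, and `callback_url` are hypothetical, and a real implementation would need retries and authentication on the callback):

```python
import json
import threading
import urllib.request
import uuid

def run_model(inputs):
    # Placeholder for the actual long-running inference.
    ...

def submit(inputs, callback_url):
    """Accept a job, return its id right away, and POST the result to the
    caller's webhook once inference finishes."""
    job_id = str(uuid.uuid4())

    def worker():
        result = run_model(inputs)
        payload = json.dumps({"job_id": job_id, "result": result}).encode()
        req = urllib.request.Request(
            callback_url,
            data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)  # deliver the webhook callback

    threading.Thread(target=worker, daemon=True).start()
    return {"job_id": job_id}
```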
Alternatives
Cog from Replicate implements a webhook.
Additional context