FR: Streaming request #19
Comments
I have some code for this that I was developing off of v0.6.0! I'll try to bring it up to v0.8.0 and put out a PR. I didn't quite figure out how to make the callbacks work nicely, but it might not be too far off.
Awesome! I look forward to testing it out - it would make the wait so much less painful! My next stop is to build out the self-iterative loops (à la CodeT/Reflexion) and, potentially, parallelize different tasks to speed up the time to a ready solution.
This is a fantastic idea
It would be excellent to add streaming as an option for OpenAI requests.
Given how long it takes to generate a response, seeing the partial response coming through would be better than a progress bar (and allows users to react/think through the response).
Server-Sent Events (SSE) are supported by the OpenAI API, and the OpenAI cookbook has SSE examples for the Python SDK.
We should be able to achieve the same with HTTP.jl; see its Request Streaming documentation.
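For reference, the SSE wire format the cookbook examples consume is simple to handle: each event is a `data: {...}` line carrying a JSON chunk with a `delta`, and the stream ends with a `data: [DONE]` sentinel. Below is a minimal, hedged sketch of the client-side parsing (the function name `accumulate_sse_deltas` is hypothetical; the chunk shape follows the OpenAI chat-completions streaming format, and a real implementation would feed each delta to a callback as it arrives rather than joining at the end):

```python
import json

def accumulate_sse_deltas(raw_stream: str) -> str:
    """Hypothetical helper: extract assistant text from an OpenAI-style
    SSE response body. Each event line is `data: <json>`; the stream
    terminates with `data: [DONE]`."""
    parts = []
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # OpenAI's end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if "content" in delta:
            # In a streaming UI this is where you'd print/emit the token
            parts.append(delta["content"])
    return "".join(parts)
```

The same line-by-line parsing should translate directly to HTTP.jl by reading the response body incrementally and invoking a user-supplied callback per delta.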
As always, I’m happy to take a stab at it if there is interest.