
Potential fix for non-responsive LLM after summary #329

Conversation

Leidtier
Contributor

This wraps openai_client.streaming_call() and openai_client.request_call() behind the same lock, blocking either of them from being called while a call to either is still ongoing.
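For illustration, here is a minimal sketch of that locking pattern, assuming a threading.Lock and synchronous calls. The method names streaming_call() and request_call() come from the PR description; the class layout and the _send_* helpers are hypothetical stand-ins for the real request logic:

```python
import threading


class OpenAIClient:
    """Client whose LLM calls are serialized through one shared lock."""

    def __init__(self) -> None:
        # A single lock guards both call paths, so a streaming call and a
        # non-streaming call can never run concurrently.
        self._call_lock = threading.Lock()

    def streaming_call(self, messages: list[dict]) -> str:
        # Blocks here if request_call() or another streaming_call() is
        # still in flight; the lock is released even if the call raises.
        with self._call_lock:
            return self._send_streaming_request(messages)

    def request_call(self, messages: list[dict]) -> str:
        with self._call_lock:
            return self._send_request(messages)

    # Hypothetical helpers standing in for the actual API requests.
    def _send_streaming_request(self, messages: list[dict]) -> str:
        ...

    def _send_request(self, messages: list[dict]) -> str:
        ...
```

Using `with` ensures the lock is released on any exit path, including exceptions, which avoids one common source of the deadlocks mentioned below.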

The fix is potential because:

  • I wasn't able to find the exact reason why calls to openai_client.streaming_call() didn't return any result.
  • I simply assumed that there is a possibility that both could be called in parallel.
  • Since implementing this, I haven't hit another block in my tests, but given the randomness of the issue, I can't be sure.
  • Adding more locks carries the risk of unwanted deadlocks, but I didn't encounter any.

@art-from-the-machine merged commit 4b6b7de into art-from-the-machine:main on Aug 4, 2024
@Leidtier deleted the potential-fix-for-non-responsive-llm-after-summary branch on August 4, 2024 at 11:37