Async version in aisuite.AsyncClient class #185
base: main
Conversation
…rovides implementations.
…stead of being abstract
@ksolo @foxty @jeffxtang @joaomdmoura could you please review the PR?
I synchronized the async client version by adding tool and thinking support. @rohitprasad15, please review the PR.
Hey, found a bug! If you try to set providers, the provider is instantiated by Client, so it is not async, and it then throws an error because the chat_completions_create_async method doesn't exist. I think that overriding the _initialize_providers method to set is_async=True should be enough.
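A minimal sketch of the suggested fix. The classes below are illustrative stand-ins, not the actual aisuite internals; it assumes the provider factory can accept an is_async flag, as the comment suggests:

```python
# Toy stand-ins for aisuite's Client/provider classes (illustrative only).
class Provider:
    def __init__(self, is_async=False):
        self.is_async = is_async


class Client:
    def __init__(self, provider_configs=None):
        self.provider_configs = provider_configs or {}
        self._initialize_providers()

    def _initialize_providers(self, is_async=False):
        # Hypothetical factory: builds one provider per configured name.
        self.providers = {
            name: Provider(is_async=is_async)
            for name in self.provider_configs
        }


class AsyncClient(Client):
    def _initialize_providers(self, is_async=True):
        # Override so providers are created in async mode, avoiding the
        # missing chat_completions_create_async error described above.
        super()._initialize_providers(is_async=True)
```

Because Client.__init__ calls self._initialize_providers(), the override is picked up automatically when an AsyncClient is constructed.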
@antoniomuzzolini Thank you for the report! I made a more general update by introducing a BaseClient class, which avoids copying and pasting. Please check the new version.
Async LLM calls support proposal
Description
I added the aisuite.AsyncClient interface with async versions of LLM calls and implemented support for a few providers: OpenAI, Anthropic, Mistral, and Fireworks. Async tests for the supported providers are included.
Demand
Async calls are necessary for production development and are widely adopted in Python libraries such as FastAPI. asyncio.gather() makes it easy and popular to run LLM calls in parallel.
The PR addresses request #61
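As a usage sketch of the parallel pattern described above: the stand-in coroutine below simulates an async provider call (such as AsyncClient's chat_completions_create_async) so the snippet runs without network access, and asyncio.gather fans the calls out concurrently. The model names and call shape are illustrative assumptions:

```python
import asyncio


async def fake_llm_call(model: str, messages: list) -> str:
    # Stand-in for an async provider call; sleeps to mimic network latency.
    await asyncio.sleep(0.01)
    return f"{model}: ok"


async def main():
    models = ["openai:gpt-4o", "anthropic:claude-3-5-sonnet"]
    messages = [{"role": "user", "content": "Hello"}]
    # Run both calls concurrently instead of one after another;
    # gather preserves the order of the input coroutines.
    return await asyncio.gather(
        *(fake_llm_call(m, messages) for m in models)
    )


results = asyncio.run(main())
```

With real providers, each coroutine would await the provider's async completion method; the gather pattern itself is unchanged.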
Changes Made
The class aisuite.AsyncClient is added. The new method
def chat_completions_create_async(self, model, messages)
is added to the Provider class. The default implementation raises NotImplementedError.
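A sketch of the provider hook described above: the base method raises NotImplementedError by default, and each supported provider overrides it. The EchoProvider subclass here is a hypothetical stand-in, not one of the real provider implementations:

```python
import asyncio


class Provider:
    def chat_completions_create(self, model, messages):
        raise NotImplementedError

    async def chat_completions_create_async(self, model, messages):
        # Default implementation: providers without async support fail loudly.
        raise NotImplementedError(
            f"{type(self).__name__} does not support async calls"
        )


class EchoProvider(Provider):
    async def chat_completions_create_async(self, model, messages):
        # Hypothetical provider that just echoes the last user message.
        return messages[-1]["content"]
```

This keeps the base class concrete (existing sync providers still work) while giving async-capable providers a single well-known method to override.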