httpx client has very poor performance for concurrent requests compared to aiohttp #1596

Open
@willthayes

Description

Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

The API client uses httpx, which has very poor performance when making concurrent requests compared to aiohttp. There is an open issue for this in the httpx repository.

This is forcing us to swap out the OpenAI SDK for our own implementation, which is a pain.

I suspect it is the root cause of the difference between node.js and Python demonstrated here

I'm not massively familiar with the development of this SDK, or whether there was a key reason for picking httpx over aiohttp. From my reading, it was switched for v1 in order to create consistency between the sync and async clients, but I'm not sure how vital that is. For our high-concurrency async use cases, however, this renders the SDK unusable.

To Reproduce

To reproduce, run chat completions with 20+ concurrent requests, benchmarking the OpenAI API client against an equivalent implementation using aiohttp. Example code can be found in the httpx issue linked above.
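A minimal harness for the benchmark described above might look like the following. This is a sketch, not the code from the linked httpx issue: `bench`, `via_httpx`, and `via_aiohttp` are hypothetical names, and the request targets (`URL`, `PAYLOAD`, `HEADERS`) are placeholders you would fill in with a real chat-completions endpoint and credentials.

```python
import asyncio
import time


async def bench(make_request, n: int) -> float:
    """Run `make_request` n times concurrently; return elapsed wall-clock seconds."""
    start = time.perf_counter()
    await asyncio.gather(*(make_request() for _ in range(n)))
    return time.perf_counter() - start


# Hypothetical usage against the two HTTP clients (requires network access,
# an API key, and concrete values for URL / PAYLOAD / HEADERS):
#
# import httpx, aiohttp
#
# async def via_httpx():
#     async with httpx.AsyncClient() as client:
#         await client.post(URL, json=PAYLOAD, headers=HEADERS)
#
# async def via_aiohttp():
#     async with aiohttp.ClientSession() as session:
#         async with session.post(URL, json=PAYLOAD, headers=HEADERS) as resp:
#             await resp.read()
#
# print("httpx:  ", asyncio.run(bench(via_httpx, 20)))
# print("aiohttp:", asyncio.run(bench(via_aiohttp, 20)))
```

Running both sides with the same payload and concurrency level should make the gap reported here directly measurable.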

Code snippets

No response

OS

Linux/macOS

Python version

v3.12

Library version

1.12.0
