Description
Confirm this is a feature request for the Python library and not the underlying OpenAI API.
- This is a feature request for the Python library
Describe the feature or improvement you're requesting
Hi Team,
We're trying to implement the following setup:
Machine A (openai-python client) -> Machine B (a proxy) -> some OpenAI-compatible provider
Machine B acts as a simple HTTP proxy, dynamically forwarding each request (URL, headers, body) it receives from Machine A to whichever OpenAI-compatible provider URL is requested (e.g. Groq, Together AI). Machine B has no static knowledge of the provider's URL until Machine A's request arrives. Streaming responses are essential.
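For concreteness, here is a minimal sketch of what we imagine Machine B doing, assuming Machine A sends the real provider base URL in a custom X-Target-Base-Url header (our own convention, not anything official). It uses FastAPI and httpx and streams the upstream body through without buffering, which should preserve SSE streaming:

import httpx
from fastapi import FastAPI, Request
from fastapi.responses import StreamingResponse
from starlette.background import BackgroundTask

app = FastAPI()
upstream = httpx.AsyncClient(timeout=None)

# Headers that must not be forwarded verbatim in either direction.
HOP_BY_HOP = {"host", "content-length", "transfer-encoding", "connection"}

@app.api_route("/{path:path}", methods=["GET", "POST", "DELETE"])
async def forward(request: Request, path: str):
    # Machine A tells us where to forward via our custom header.
    target_base = request.headers["x-target-base-url"].rstrip("/")
    # Pass everything else (including Authorization) through untouched.
    headers = {
        k: v for k, v in request.headers.items()
        if k.lower() not in HOP_BY_HOP | {"x-target-base-url"}
    }
    body = await request.body()
    req = upstream.build_request(
        request.method, f"{target_base}/{path}", headers=headers, content=body
    )
    # stream=True keeps SSE chunks flowing instead of buffering the body.
    resp = await upstream.send(req, stream=True)
    resp_headers = {
        k: v for k, v in resp.headers.items() if k.lower() not in HOP_BY_HOP
    }
    return StreamingResponse(
        resp.aiter_raw(),
        status_code=resp.status_code,
        headers=resp_headers,
        background=BackgroundTask(resp.aclose),
    )

Is something along these lines the intended approach, or is there a better-supported pattern?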
The README shows how to configure http_client with a proxy:
from openai import OpenAI, DefaultHttpxClient

client = OpenAI(
    base_url="http://my.test.server.example.com:8083/v1",
    http_client=DefaultHttpxClient(
        proxy="http://my.test.proxy.example.com",
    ),
)
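In our case, though, Machine B is itself the endpoint rather than a transparent HTTP proxy, so we imagine Machine A pointing base_url at Machine B directly and passing the real provider base URL in the custom header. A sketch of that side, where the header name, hostnames, and model are our own placeholders (default_headers is the documented way to attach extra headers to every request):

from openai import OpenAI

# Sketch: Machine A talks to Machine B as if it were the provider,
# carrying the real provider base URL in a custom header that
# Machine B understands (our convention, not part of openai-python).
client = OpenAI(
    base_url="http://machine-b.internal:8083",  # hypothetical proxy address
    api_key="<provider-api-key>",  # Machine B forwards Authorization as-is
    default_headers={"X-Target-Base-Url": "https://api.groq.com/openai/v1"},
)

stream = client.chat.completions.create(
    model="llama-3.3-70b-versatile",  # example model name
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="")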
How can we configure Machine B to achieve this dynamic forwarding pattern, ensuring that every aspect of the OpenAI request (method, headers including authentication, body, and especially streaming) is correctly passed through? Is there a Docker image we could use as the proxy in this case? We haven't found clear guidance or examples for this specific dynamic-forwarding scenario.
As a feature request, it would be incredibly valuable if the documentation gained a dedicated section on implementing such a dynamic forwarding proxy (A -> B -> Provider). This is a common requirement for flexible deployments, and the current openai-python documentation doesn't seem to cover it.
Thanks for any pointers!
Additional context
No response