Hi!
Do you have plans to add (or officially document) an OpenAI-compatible upstream provider so I can set a custom base URL and use custom model IDs (e.g. a local vLLM endpoint like http://127.0.0.1:8001/v1)?
If this is already supported, what’s the recommended way to configure:
- provider name
- `{PROVIDER}_API_BASE` (and whether an API key can be optional/dummy)
- custom model IDs / model listing
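For reference, this is how I currently talk to the vLLM endpoint directly with the `openai` Python client (a minimal sketch; `my-local-model` is a placeholder for whichever model ID the server actually serves). I'd like to point this project at the same endpoint:

```python
from openai import OpenAI

# Local vLLM server exposing an OpenAI-compatible API.
# The key is a placeholder; my local server doesn't enforce one.
client = OpenAI(
    base_url="http://127.0.0.1:8001/v1",
    api_key="dummy",
)

# Discover the model IDs the server exposes.
for model in client.models.list():
    print(model.id)

# "my-local-model" stands in for one of the model IDs listed above.
response = client.chat.completions.create(
    model="my-local-model",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```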
Thanks