Status: Closed
Labels: llm (LLM provider related)
Description
Parent Epic
Depends On
Summary
Override `list_models_remote` in `OpenAiProvider` (and, by delegation, in `CompatibleProvider`)
to query OpenAI-compatible `/v1/models` endpoints. This covers OpenAI, Gemini (via its
OpenAI-compatibility endpoint), Grok (xAI), Groq, vLLM, and any other `CompatibleProvider` instance.
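A minimal, synchronous sketch of the delegation described above, using stand-in types. `Provider`, `ModelInfo`, and the stub method body are assumptions for illustration, not the crate's real API (which is presumably async and fallible):

```rust
// Stand-in for the crate's real model-info type (hypothetical).
struct ModelInfo {
    id: String,
}

// Stand-in trait with the method this issue overrides (hypothetical shape).
trait Provider {
    fn list_models_remote(&self) -> Vec<ModelInfo>;
}

struct OpenAiProvider {
    base_url: String,
}

impl Provider for OpenAiProvider {
    fn list_models_remote(&self) -> Vec<ModelInfo> {
        // Real implementation: GET {self.base_url}/v1/models and deserialize.
        // Stubbed here so the delegation is visible.
        vec![ModelInfo {
            id: format!("stub-from-{}", self.base_url),
        }]
    }
}

// CompatibleProvider wraps an OpenAiProvider and simply forwards the call.
struct CompatibleProvider {
    inner: OpenAiProvider,
}

impl Provider for CompatibleProvider {
    fn list_models_remote(&self) -> Vec<ModelInfo> {
        self.inner.list_models_remote()
    }
}
```

Any OpenAI-compatible endpoint (Groq, Grok, vLLM, etc.) then differs only in the `base_url` carried by the inner provider.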
Endpoint
GET {base_url}/v1/models
Authorization: Bearer {api_key}
Response Shape (OpenAI standard)
{
  "data": [
    {
      "id": "gpt-4o",
      "object": "model",
      "created": 1715367049,
      "owned_by": "openai"
    }
  ],
  "object": "list"
}

Provider-Specific Notes
| Provider | base_url | Notes |
|---|---|---|
| OpenAI | https://api.openai.com | Standard; filter to gpt-*, o*, text-embedding-* |
| Gemini | https://generativelanguage.googleapis.com/v1beta/openai | Returns gemini-* models |
| Grok (xAI) | https://api.x.ai | Returns grok-* models |
| Groq | https://api.groq.com/openai | Returns all Groq-hosted models |
| vLLM / local | configurable | Returns loaded models only |
Implementation Notes

- `OpenAiProvider::list_models_remote` hits `{self.base_url}/v1/models`.
- Cache slug = sanitized `base_url` hostname (e.g. "openai_api_openai_com").
- `CompatibleProvider::list_models_remote` delegates to `self.inner.list_models_remote()`.
- Convert `created` (unix timestamp) to `chrono::DateTime<Utc>` for `ModelInfo::created_at`.
- Do not filter by model type for `CompatibleProvider`; downstream config can filter.
Acceptance Criteria

- `OpenAiProvider::list_models_remote` returns the full model list from `/v1/models`.
- `CompatibleProvider` delegates correctly.
- Cache slug is derived from `base_url` to avoid collisions between providers.
- Network failure returns `LlmError::Network` and does not overwrite a valid cache.
- Unit tests for slug derivation and response deserialization.
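A sketch of the slug derivation named in the criteria. It assumes the slug is the provider name plus the sanitized hostname (which matches the "openai_api_openai_com" example in the notes); `cache_slug` is a hypothetical helper, not the crate's real function:

```rust
/// Derive a cache slug from a provider name and its base_url, e.g.
/// ("openai", "https://api.openai.com") -> "openai_api_openai_com".
/// Assumption: non-alphanumeric characters in the host become '_'.
fn cache_slug(provider: &str, base_url: &str) -> String {
    // Strip the scheme, then take everything up to the first '/' as the host,
    // so path suffixes like Groq's "/openai" do not leak into the slug.
    let host = base_url
        .trim_start_matches("https://")
        .trim_start_matches("http://")
        .split('/')
        .next()
        .unwrap_or("");
    let sanitized: String = host
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() { c } else { '_' })
        .collect();
    format!("{provider}_{sanitized}")
}
```

Keying on the hostname (rather than the provider name alone) is what avoids collisions when two `CompatibleProvider` instances share a provider type but point at different endpoints.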