
Add support for local OpenAI-compatible endpoints #14

Open
patrickvossler18 wants to merge 1 commit into main from local-openai-support

Conversation

@patrickvossler18
Collaborator

Enable custom base URLs so users can run inference against local servers that expose the OpenAI-compatible /v1/chat/completions API. This supports data privacy, cost reduction, and experimentation with open-weight models.

  • Add LocalOpenAi sentinel enum and is_local_openai predicate
  • Add base_url, local_model_name, native_structured_output params to LLMApi
  • Add get_client() branch for local models using ChatOpenAI with base_url
  • Generalize structured output checks via _supports_native_structured_output()
  • Add 18 unit tests covering the enum, client construction, and env var fallbacks
  • Update README with local model usage docs
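The sentinel and predicate from the first bullet might look roughly like this. Only the names `LocalOpenAi`, `is_local_openai`, and `base_url` come from the PR; the member name, the environment variable, and the default URL are assumptions for this sketch:

```python
import os
from enum import Enum
from typing import Optional


class LocalOpenAi(Enum):
    # Sentinel standing in for "any server that speaks /v1/chat/completions";
    # the concrete model is supplied separately via local_model_name.
    CHAT = "local-openai-chat"


def is_local_openai(model: object) -> bool:
    """True when the selected model is the local sentinel rather than a hosted provider."""
    return isinstance(model, LocalOpenAi)


def resolve_base_url(base_url: Optional[str] = None) -> str:
    # An explicit argument wins; otherwise fall back to an environment variable
    # (variable name and default port are assumptions, not from the PR).
    return base_url or os.environ.get("OPENAI_BASE_URL", "http://localhost:8000/v1")
```

Keeping the sentinel as its own enum (rather than a magic string) lets `get_client()` dispatch on type without ambiguity against hosted model names.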
