
Add base_url support for AI providers (#1) #85

Open
veo3sz01-bot wants to merge 1 commit into repowise-dev:main from veo3sz01-bot:main

Conversation

@veo3sz01-bot

Adds support for configuring a custom base_url for LLM/embedding providers (OpenAI, Anthropic, Gemini, Ollama, LiteLLM) to enable proxies and OpenAI-compatible endpoints.

What changed

  • Providers now accept/forward base_url (plus the LiteLLM base_url / api_base alias), with env var support (e.g., OPENAI_BASE_URL, ANTHROPIC_BASE_URL, etc.)
  • CLI and server provider resolution now pass through base_url from env (and CLI also from repo config)
  • Docs updated and tests added for base URL resolution

* feat: add base url support for providers

Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>

* Update packages/core/src/repowise/core/providers/llm/gemini.py

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* Document provider base_url env vars

Agent-Logs-Url: https://github.com/veo3sz01-bot/repowise/sessions/19d8a471-8cf0-47ec-be83-37c705d7e832

Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>

* Remove server base_url config fallback

Agent-Logs-Url: https://github.com/veo3sz01-bot/repowise/sessions/f1ae2603-6f6d-4530-b7e0-6d6cc811975c

Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>

---------

Co-authored-by: openai-code-agent[bot] <242516109+Codex@users.noreply.github.com>
Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>
Co-authored-by: Indah Saputra <veo3.sz01@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings April 14, 2026 16:41

Copilot AI left a comment


Pull request overview

Adds configurable base_url support across LLM/embedding providers to enable proxies and OpenAI-compatible endpoints.

Changes:

  • Forward base_url into provider constructors (OpenAI, Anthropic, Gemini, Ollama, LiteLLM) with provider-specific env var support.
  • Update CLI and server-side provider resolution to pass through base_url from env/config.
  • Add unit tests for CLI base URL resolution and update user docs for new env vars.

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.

Summary per file:

| File | Description |
| --- | --- |
| tests/unit/cli/test_helpers.py | Adds tests verifying base_url resolution from env and repo config. |
| packages/cli/src/repowise/cli/helpers.py | Resolves provider base_url from env/config and forwards it to get_provider(). |
| packages/server/src/repowise/server/provider_config.py | Adds server-side base_url env resolution and forwards it when instantiating providers. |
| packages/server/src/repowise/server/mcp_server/tool_answer.py | Extends MCP answer provider auto-resolution to include base_url env vars. |
| packages/core/src/repowise/core/providers/llm/openai.py | Adds OPENAI_BASE_URL env fallback when constructing the OpenAI client. |
| packages/core/src/repowise/core/providers/llm/anthropic.py | Adds ANTHROPIC_BASE_URL env fallback when constructing the Anthropic client. |
| packages/core/src/repowise/core/providers/llm/gemini.py | Adds GEMINI_BASE_URL support via google-genai HttpOptions / Client configuration. |
| packages/core/src/repowise/core/providers/llm/ollama.py | Makes base_url optional and resolves from OLLAMA_BASE_URL env var or default. |
| packages/core/src/repowise/core/providers/llm/litellm.py | Adds base_url alias for api_base and env fallbacks for LiteLLM base URL. |
| packages/core/src/repowise/core/providers/embedding/openai.py | Adds base_url support for the OpenAI embedder client construction. |
| docs/USER_GUIDE.md | Documents provider base URL environment variables and aliases. |


Comment on lines +272 to +273:

```python
http_opts = genai_types.HttpOptions(base_url=base_url) if base_url else None
self._client = genai.Client(api_key=api_key, http_options=http_opts)
```

Copilot AI Apr 14, 2026


In stream_chat(), genai_types.HttpOptions(base_url=...) and genai.Client(..., http_options=...) are called without the TypeError fallbacks used in _generate_with_retry(). With older/newer google-genai versions that don’t support base_url in HttpOptions or the http_options kwarg on Client, this will raise at runtime and break chat streaming. Please mirror the defensive logic from _generate_with_retry() (or factor client construction into a shared helper) so stream_chat() degrades gracefully when base_url/http_options aren’t supported.

Suggested change:

```diff
-http_opts = genai_types.HttpOptions(base_url=base_url) if base_url else None
-self._client = genai.Client(api_key=api_key, http_options=http_opts)
+http_opts = None
+if base_url:
+    try:
+        http_opts = genai_types.HttpOptions(base_url=base_url)
+    except TypeError:
+        http_opts = genai_types.HttpOptions()
+try:
+    self._client = genai.Client(api_key=api_key, http_options=http_opts)
+except TypeError:
+    self._client = genai.Client(api_key=api_key)
```

Comment thread on docs/USER_GUIDE.md:

| Variable | Required | Description |
| --- | --- | --- |
| `OPENAI_BASE_URL` | No | Base URL override for OpenAI-compatible APIs |
| `GEMINI_API_KEY` | If using Gemini | Google Gemini API key |
| `GEMINI_BASE_URL` | No | Base URL override for Gemini-compatible APIs |
| `OLLAMA_BASE_URL` | If using Ollama | Ollama server URL (default: `http://localhost:11434`) |

Copilot AI Apr 14, 2026


The env var table marks OLLAMA_BASE_URL as required “If using Ollama”, but the description states there is a default (http://localhost:11434). If the app works with the default URL, this variable isn’t actually required and the “Required” column should likely be No (or clarify that it’s only required when using a non-default host).

Suggested change:

```diff
-| `OLLAMA_BASE_URL` | If using Ollama | Ollama server URL (default: `http://localhost:11434`) |
+| `OLLAMA_BASE_URL` | No | Ollama server URL override (default: `http://localhost:11434`) |
```

Comment on lines +245 to +252:

```python
env_vars = {
    "anthropic": ["ANTHROPIC_BASE_URL"],
    "openai": ["OPENAI_BASE_URL"],
    "gemini": ["GEMINI_BASE_URL"],
    "ollama": ["OLLAMA_BASE_URL"],
    "litellm": ["LITELLM_BASE_URL", "LITELLM_API_BASE"],
}
for var in env_vars.get(name, []):
```

Copilot AI Apr 14, 2026


_resolve_base_url() duplicates the same provider→env-var mapping logic that also exists in the server code (packages/server/.../provider_config.py and .../tool_answer.py). This creates a drift risk (new providers/aliases require updating multiple tables). Consider centralizing this mapping/resolution in a shared helper (e.g., in repowise.core.providers), and reusing it from CLI/server to keep behavior consistent.

Suggested change:

```diff
-env_vars = {
-    "anthropic": ["ANTHROPIC_BASE_URL"],
-    "openai": ["OPENAI_BASE_URL"],
-    "gemini": ["GEMINI_BASE_URL"],
-    "ollama": ["OLLAMA_BASE_URL"],
-    "litellm": ["LITELLM_BASE_URL", "LITELLM_API_BASE"],
-}
-for var in env_vars.get(name, []):
+normalized_name = name.upper().replace("-", "_")
+env_vars = [f"{normalized_name}_BASE_URL"]
+if normalized_name == "LITELLM":
+    env_vars.append("LITELLM_API_BASE")
+for var in env_vars:
```
