Add base_url support for AI providers (#1) #85
veo3sz01-bot wants to merge 1 commit into repowise-dev:main from
Conversation
* feat: add base url support for providers
  Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>
* Update packages/core/src/repowise/core/providers/llm/gemini.py
  Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
* Document provider base_url env vars
  Agent-Logs-Url: https://github.com/veo3sz01-bot/repowise/sessions/19d8a471-8cf0-47ec-be83-37c705d7e832
  Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>
* Remove server base_url config fallback
  Agent-Logs-Url: https://github.com/veo3sz01-bot/repowise/sessions/f1ae2603-6f6d-4530-b7e0-6d6cc811975c
  Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>

Co-authored-by: openai-code-agent[bot] <242516109+Codex@users.noreply.github.com>
Co-authored-by: veo3sz01-bot <271450703+veo3sz01-bot@users.noreply.github.com>
Co-authored-by: Indah Saputra <veo3.sz01@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Pull request overview
Adds configurable base_url support across LLM/embedding providers to enable proxies and OpenAI-compatible endpoints.
Changes:
- Forward `base_url` into provider constructors (OpenAI, Anthropic, Gemini, Ollama, LiteLLM) with provider-specific env var support.
- Update CLI and server-side provider resolution to pass through `base_url` from env/config.
- Add unit tests for CLI base URL resolution and update user docs for new env vars.
Reviewed changes
Copilot reviewed 11 out of 11 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| tests/unit/cli/test_helpers.py | Adds tests verifying base_url resolution from env and repo config. |
| packages/cli/src/repowise/cli/helpers.py | Resolves provider base_url from env/config and forwards it to get_provider(). |
| packages/server/src/repowise/server/provider_config.py | Adds server-side base_url env resolution and forwards it when instantiating providers. |
| packages/server/src/repowise/server/mcp_server/tool_answer.py | Extends MCP answer provider auto-resolution to include base_url env vars. |
| packages/core/src/repowise/core/providers/llm/openai.py | Adds OPENAI_BASE_URL env fallback when constructing the OpenAI client. |
| packages/core/src/repowise/core/providers/llm/anthropic.py | Adds ANTHROPIC_BASE_URL env fallback when constructing the Anthropic client. |
| packages/core/src/repowise/core/providers/llm/gemini.py | Adds GEMINI_BASE_URL support via google-genai HttpOptions / Client configuration. |
| packages/core/src/repowise/core/providers/llm/ollama.py | Makes base_url optional and resolves from OLLAMA_BASE_URL env var or default. |
| packages/core/src/repowise/core/providers/llm/litellm.py | Adds base_url alias for api_base and env fallbacks for LiteLLM base URL. |
| packages/core/src/repowise/core/providers/embedding/openai.py | Adds base_url support for the OpenAI embedder client construction. |
| docs/USER_GUIDE.md | Documents provider base URL environment variables and aliases. |
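The env-fallback behavior described for the OpenAI provider above can be sketched as follows. This is an illustrative helper, not the actual constructor code in `packages/core/src/repowise/core/providers/llm/openai.py`; the resolution order (explicit argument first, then `OPENAI_BASE_URL`) is what the change documents.

```python
import os

def resolve_openai_base_url(base_url=None):
    """Prefer an explicit base_url, then the OPENAI_BASE_URL env var, else None
    (letting the client library use its default endpoint)."""
    return base_url or os.environ.get("OPENAI_BASE_URL") or None
```

When neither source is set, returning `None` lets the underlying client fall back to its built-in default URL.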
```python
http_opts = genai_types.HttpOptions(base_url=base_url) if base_url else None
self._client = genai.Client(api_key=api_key, http_options=http_opts)
```
In stream_chat(), genai_types.HttpOptions(base_url=...) and genai.Client(..., http_options=...) are called without the TypeError fallbacks used in _generate_with_retry(). With older/newer google-genai versions that don’t support base_url in HttpOptions or the http_options kwarg on Client, this will raise at runtime and break chat streaming. Please mirror the defensive logic from _generate_with_retry() (or factor client construction into a shared helper) so stream_chat() degrades gracefully when base_url/http_options aren’t supported.
Suggested change:

```diff
- http_opts = genai_types.HttpOptions(base_url=base_url) if base_url else None
- self._client = genai.Client(api_key=api_key, http_options=http_opts)
+ http_opts = None
+ if base_url:
+     try:
+         http_opts = genai_types.HttpOptions(base_url=base_url)
+     except TypeError:
+         http_opts = genai_types.HttpOptions()
+ try:
+     self._client = genai.Client(api_key=api_key, http_options=http_opts)
+ except TypeError:
+     self._client = genai.Client(api_key=api_key)
```
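The "shared helper" the review recommends could look roughly like the sketch below. The factory parameters stand in for `genai.Client` and `genai_types.HttpOptions` so the TypeError-fallback pattern can be shown without assuming a particular `google-genai` version; the helper name and injection style are assumptions, not the actual repowise code.

```python
def build_client(client_factory, http_options_factory, api_key, base_url=None):
    """Construct a client defensively: degrade gracefully when the installed
    library version rejects base_url or http_options keyword arguments."""
    http_opts = None
    if base_url:
        try:
            http_opts = http_options_factory(base_url=base_url)
        except TypeError:
            # Library version without base_url support on HttpOptions.
            http_opts = http_options_factory()
    try:
        return client_factory(api_key=api_key, http_options=http_opts)
    except TypeError:
        # Library version whose Client does not accept http_options.
        return client_factory(api_key=api_key)
```

Calling this from both `_generate_with_retry()` and `stream_chat()` would keep the two code paths from drifting apart.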
| `OPENAI_BASE_URL` | No | Base URL override for OpenAI-compatible APIs |
| `GEMINI_API_KEY` | If using Gemini | Google Gemini API key |
| `GEMINI_BASE_URL` | No | Base URL override for Gemini-compatible APIs |
| `OLLAMA_BASE_URL` | If using Ollama | Ollama server URL (default: `http://localhost:11434`) |
The env var table marks `OLLAMA_BASE_URL` as required "If using Ollama", but the description states there is a default (`http://localhost:11434`). If the app works with the default URL, this variable isn't actually required, and the "Required" column should be No (or clarify that it's only required when using a non-default host).
Suggested change:

```diff
- | `OLLAMA_BASE_URL` | If using Ollama | Ollama server URL (default: `http://localhost:11434`) |
+ | `OLLAMA_BASE_URL` | No | Ollama server URL override (default: `http://localhost:11434`) |
```
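The resolution order implied by that table row (explicit argument, then `OLLAMA_BASE_URL`, then the documented default) can be sketched like this; the function name is illustrative and not the actual code in `ollama.py`:

```python
import os

DEFAULT_OLLAMA_URL = "http://localhost:11434"

def resolve_ollama_base_url(base_url=None):
    """Explicit argument wins, then the env var, then the documented default."""
    return base_url or os.environ.get("OLLAMA_BASE_URL") or DEFAULT_OLLAMA_URL
```

Because the chain always ends in a default, the env var is genuinely optional, which supports marking it "No" in the table.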
```python
env_vars = {
    "anthropic": ["ANTHROPIC_BASE_URL"],
    "openai": ["OPENAI_BASE_URL"],
    "gemini": ["GEMINI_BASE_URL"],
    "ollama": ["OLLAMA_BASE_URL"],
    "litellm": ["LITELLM_BASE_URL", "LITELLM_API_BASE"],
}
for var in env_vars.get(name, []):
```
_resolve_base_url() duplicates the same provider→env-var mapping logic that also exists in the server code (packages/server/.../provider_config.py and .../tool_answer.py). This creates a drift risk (new providers/aliases require updating multiple tables). Consider centralizing this mapping/resolution in a shared helper (e.g., in repowise.core.providers), and reusing it from CLI/server to keep behavior consistent.
Suggested change:

```diff
- env_vars = {
-     "anthropic": ["ANTHROPIC_BASE_URL"],
-     "openai": ["OPENAI_BASE_URL"],
-     "gemini": ["GEMINI_BASE_URL"],
-     "ollama": ["OLLAMA_BASE_URL"],
-     "litellm": ["LITELLM_BASE_URL", "LITELLM_API_BASE"],
- }
- for var in env_vars.get(name, []):
+ normalized_name = name.upper().replace("-", "_")
+ env_vars = [f"{normalized_name}_BASE_URL"]
+ if normalized_name == "LITELLM":
+     env_vars.append("LITELLM_API_BASE")
+ for var in env_vars:
```
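Written as the shared helper the review recommends, the suggested normalization could look like the sketch below. The function name and its proposed home in `repowise.core.providers` are assumptions drawn from the review comment, not existing repowise code.

```python
import os

def resolve_base_url(name):
    """Derive base-URL env var names from the provider name, so new providers
    need no table updates; LiteLLM keeps its legacy LITELLM_API_BASE alias."""
    normalized = name.upper().replace("-", "_")
    env_vars = [f"{normalized}_BASE_URL"]
    if normalized == "LITELLM":
        env_vars.append("LITELLM_API_BASE")
    for var in env_vars:
        value = os.environ.get(var)
        if value:
            return value
    return None
```

Importing one helper from both the CLI and the server removes the duplicated provider-to-env-var tables and the drift risk the review describes.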
Adds support for configuring a custom `base_url` for LLM/embedding providers (OpenAI, Anthropic, Gemini, Ollama, LiteLLM) to enable proxies and OpenAI-compatible endpoints.

What changed
- Providers accept `base_url` (and the LiteLLM `base_url` → `api_base` alias), with env var support (e.g., `OPENAI_BASE_URL`, `ANTHROPIC_BASE_URL`, etc.)
- CLI and server resolve `base_url` from env (and the CLI also from repo config)
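With these env vars in place, routing a provider through a proxy needs no code changes; for example (hostnames and paths below are hypothetical):

```shell
# Point OpenAI-compatible traffic at an internal proxy, and Ollama at a
# remote host, before launching the tool. Values here are illustrative.
export OPENAI_BASE_URL="https://llm-proxy.internal.example/v1"
export OLLAMA_BASE_URL="http://gpu-box.internal.example:11434"
```

Unsetting the variables restores each provider's default endpoint.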