
fix(ollama): thread api_base to get_model_info + graceful fallback#21970

Merged
krrishdholakia merged 111 commits into BerriAI:litellm_oss_staging_02_23_2026 from Chesars:fix/ollama-api-base-model-info
Feb 24, 2026
Conversation

Chesars (Collaborator) commented Feb 24, 2026

Relevant issues

Fixes #21967
Fixes #9602
Fixes #7997
Fixes #10158

Pre-Submission checklist

  • I have added testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement)
  • My PR passes all unit tests via make test-unit
  • My PR's scope is as isolated as possible; it solves one specific problem
  • I have requested a Greptile review by commenting @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

Type

🐛 Bug Fix

Changes

When users pass api_base to litellm.completion() for Ollama, the LLM call itself works correctly, but the model info fetch (context window, function_calling support) ignores the user's api_base and only reads the OLLAMA_API_BASE env var, falling back to localhost:11434. This causes confusing error messages in the logs and incomplete model metadata.

api_base was never passed through the get_model_info() call chain; OllamaConfig.get_model_info() resolved api_base from the env var only.
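
For illustration, a call of this shape triggered the stale lookup (model name and address are placeholders):

```python
import litellm

# The chat request reaches the remote server fine, but the model-info
# lookup used to ignore api_base and query localhost:11434 (or
# OLLAMA_API_BASE) instead.
response = litellm.completion(
    model="ollama/llama3",                 # placeholder model
    api_base="http://192.168.1.50:11434",  # remote Ollama server
    messages=[{"role": "user", "content": "hi"}],
)
```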

Fix

  1. OllamaConfig.get_model_info() accepts an optional api_base, prefers it over the env-var fallback, and returns safe defaults on connection failure instead of raising (see the sketch after this list)
  2. litellm/utils.py threads api_base through _get_model_info_helper(), _cached_get_model_info_helper(), and get_model_info() as an optional param defaulting to None, with no impact on other providers
  3. litellm_logging.py passes api_base from litellm_params through get_model_cost_information() to get_model_info()
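
A minimal sketch of the fallback chain in item 1. This is illustrative only, not the actual OllamaConfig implementation (which builds a full ModelInfoBase in litellm/llms/ollama/completion/transformation.py):

```python
import os
import httpx

def get_model_info(model: str, api_base: str | None = None) -> dict:
    # Resolution order introduced by this PR: explicit arg, then env var,
    # then the localhost default.
    base = api_base or os.environ.get("OLLAMA_API_BASE") or "http://localhost:11434"
    try:
        # Ollama's model-metadata endpoint.
        resp = httpx.post(f"{base}/api/show", json={"name": model}, timeout=5.0)
        resp.raise_for_status()
        return resp.json()
    except httpx.HTTPError:
        # Graceful fallback: safe defaults instead of raising.
        return {"max_tokens": None, "supports_function_calling": False}
```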

Backwards compatibility

  • api_base is an optional parameter defaulting to None across all changed signatures
  • Non-Ollama providers ignore it entirely; they resolve model info from the static JSON cost map
  • @lru_cache behavior is unchanged: None is hashable, and non-Ollama callers always pass None, so their cache key is the same as before (see the sketch after this list)
  • Ollama calls with different api_base values get separate cache entries, which is correct: different servers may serve different models
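
A standalone illustration of the cache-key point (not litellm code):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def cached_lookup(model: str, api_base: str | None = None) -> str:
    return f"info for {model} via {api_base or 'cost map'}"

cached_lookup("gpt-4")                             # same key non-Ollama callers always used
cached_lookup("ollama/llama3", "http://a:11434")   # distinct entry per server
cached_lookup("ollama/llama3", "http://b:11434")   # another distinct entry
```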

Tests added

4 unit tests in tests/test_litellm/llms/ollama/test_ollama_model_info.py:

  • Uses provided api_base over env var
  • Falls back to OLLAMA_API_BASE env var when no api_base given
  • Returns defaults gracefully on connection error (no exception)
  • Strips ollama/ and ollama_chat/ prefixes correctly
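
As a rough illustration of the graceful-fallback case (the real tests mock the HTTP layer rather than hitting a dead port):

```python
import litellm

def test_model_info_defaults_when_server_unreachable():
    # Assumption for illustration: get_model_info accepts api_base
    # (added by this PR) and returns defaults instead of raising.
    info = litellm.get_model_info(
        model="ollama/llama3",          # placeholder model
        custom_llm_provider="ollama",
        api_base="http://127.0.0.1:9",  # nothing listens here
    )
    assert info is not None
```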

ta-stripe and others added 30 commits February 20, 2026 10:57
fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (BerriAI#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes BerriAI#21544
feat(proxy): add custom favicon support (BerriAI#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes BerriAI#8323
fix(bedrock): prevent double UUID in create_file S3 key (BerriAI#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes BerriAI#21546
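
The guard described above amounts to something like this (illustrative, not the exact diff):

```python
# Only fall back to api_base if the provider's transform did not
# already record the real upload URL (UUID-2 for the S3 key).
if litellm_params.get("upload_url") is None:
    litellm_params["upload_url"] = api_base
```
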
feat(semantic-cache): support configurable vector dimensions for Qdrant (BerriAI#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes BerriAI#9377
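
Usage would look roughly like this; the cache type string, import path, and other kwargs are assumptions based on litellm's existing Qdrant semantic cache, and only qdrant_semantic_cache_vector_size is new here:

```python
import litellm
from litellm.caching import Cache  # import path may vary by litellm version

litellm.cache = Cache(
    type="qdrant-semantic",
    qdrant_semantic_cache_embedding_model="text-embedding-3-small",
    qdrant_semantic_cache_vector_size=1024,  # new: match e.g. a 1024-dim Stella model
)
```
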
fix(utils): normalize camelCase thinking param keys to snake_case (BerriAI#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().
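
A sketch of that normalization (helper name is hypothetical):

```python
def normalize_thinking_param(thinking: dict) -> dict:
    # Map known camelCase keys sent by some clients to the
    # snake_case keys the thinking parameter expects.
    key_map = {"budgetTokens": "budget_tokens"}
    return {key_map.get(k, k): v for k, v in thinking.items()}

assert normalize_thinking_param({"type": "enabled", "budgetTokens": 2048}) == {
    "type": "enabled",
    "budget_tokens": 2048,
}
```
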
feat: add optional digest mode for Slack alert types (BerriAI#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
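
The aggregation boils down to keying alerts by that tuple; a toy sketch with hypothetical names:

```python
from collections import defaultdict

digests = defaultdict(lambda: {"count": 0, "first_ts": None, "last_ts": None})

def record_alert(alert_type: str, request_model: str, api_base: str, ts: float) -> None:
    entry = digests[(alert_type, request_model, api_base)]
    entry["count"] += 1
    entry["first_ts"] = entry["first_ts"] if entry["first_ts"] is not None else ts
    entry["last_ts"] = ts
    # A background flusher would emit one summary per key every
    # digest_interval seconds.
```
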
feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.
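
The underlying httpx mechanism looks like this; the handler and payload here are illustrative, not the proxy's actual mock:

```python
import httpx

def handler(request: httpx.Request) -> httpx.Response:
    # Canned zero-latency response standing in for the upstream LLM.
    return httpx.Response(
        200,
        json={"choices": [{"message": {"role": "assistant", "content": "ok"}}]},
    )

client = httpx.Client(transport=httpx.MockTransport(handler))
resp = client.post("https://api.openai.com/v1/chat/completions", json={})
assert resp.status_code == 200
```
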
Arindam200 and others added 18 commits February 23, 2026 12:10
Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs (BerriAI#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.
fix(tests): make RPM limit test sequential to fix race condition
…section

docs: add performance & reliability section to v1.81.14 release notes
feat(videos): add variant parameter to video content download (BerriAI#21955)

OpenAI video models support downloading variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.
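
For OpenAI the plumbing ends in a URL like this (a sketch with hypothetical names):

```python
from urllib.parse import urlencode

def build_content_url(base_url: str, video_id: str, variant: str | None = None) -> str:
    url = f"{base_url}/videos/{video_id}/content"
    if variant is not None:  # e.g. "thumbnail" or "spritesheet"
        url += "?" + urlencode({"variant": variant})
    return url
```
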
Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.
…e_workflows

Revert duplicate issue checker to text-based matching
fix(videos): pass api_key from litellm_params to video remix handlers (BerriAI#21965)

video_remix_handler and async_video_remix_handler were not falling back
to litellm_params.api_key when the api_key parameter was None, causing
Authorization: Bearer None to be sent to the provider. This matches the
pattern already used by async_video_generation_handler.
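
The fix is essentially the one-line fallback the generation handler already uses (a sketch):

```python
# Prefer the explicit argument, else fall back to litellm_params,
# so "Authorization: Bearer None" is never sent.
api_key = api_key or litellm_params.api_key
```
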
[Fix] Spend Update Queue Aggregation Never Triggers with Default Presets
…rage_00

[Infra] UI - Unit Testing Coverage: MCP Semantic Filter
vercel bot commented Feb 24, 2026

The latest updates on your projects.

Project: litellm. Deployment: Ready. Actions: Preview, Comment. Updated (UTC): Feb 24, 2026 2:53am


greptile-apps bot (Contributor) commented Feb 24, 2026

Greptile Summary

This PR fixes a bug where api_base passed to litellm.completion() for Ollama was ignored during model info fetching (get_model_info), causing the metadata lookup to always hit OLLAMA_API_BASE env var or localhost:11434 even when the user specified a different server. The fix also adds graceful fallback — returning safe defaults instead of raising when the Ollama server is unreachable.

  • OllamaConfig.get_model_info() now accepts an optional api_base parameter with a fallback chain: provided value → env var → localhost:11434. On connection/HTTP failure, it returns safe defaults instead of raising.
  • litellm/utils.py threads api_base through get_model_info(), _cached_get_model_info_helper(), and _get_model_info_helper() — all with Optional[str] = None defaults for backward compatibility.
  • litellm_logging.py passes api_base from litellm_params into the model cost information lookup.
  • Backward compatibility is maintained: api_base=None default means non-Ollama providers are unaffected, and lru_cache behavior is preserved (None is hashable, same cache key as before for non-Ollama callers).
  • 4 unit tests added covering: provided api_base usage, env var fallback, graceful error handling, and prefix stripping. All tests use mocks.

Confidence Score: 4/5

  • This PR is safe to merge — changes are backward-compatible, well-scoped, and covered by tests.
  • The fix correctly threads api_base through the model info call chain with optional parameters and None defaults, ensuring no impact on non-Ollama providers. The lru_cache behavior is preserved. The graceful fallback is a reasonable improvement over raising on connection errors. One minor concern: debug-level logging on errors may make it hard for users to notice issues like misspelled model names, but this is a style preference rather than a correctness issue.
  • litellm/llms/ollama/completion/transformation.py — the error logging level (debug vs warning) may be too quiet for production troubleshooting.

Important Files Changed

  • litellm/llms/ollama/completion/transformation.py: Core fix. get_model_info() now accepts an optional api_base parameter with a fallback chain (param -> env var -> default), and error handling changed from raising to returning safe defaults. Minor concern: debug-level logging may hide real errors like misspelled model names.
  • litellm/utils.py: Threads api_base through get_model_info(), _get_model_info_helper(), and _cached_get_model_info_helper(), all optional with None default. Removes the now-unnecessary OllamaError re-raise. lru_cache behavior is preserved (None is hashable; different api_base values create different cache entries).
  • litellm/litellm_core_utils/litellm_logging.py: Passes api_base from litellm_params through get_model_cost_information() to get_model_info(). Clean, minimal changes with proper optional parameter handling.
  • tests/test_litellm/llms/ollama/test_ollama_model_info.py: Four well-structured unit tests covering provided api_base usage, env var fallback, graceful connection error handling, and prefix stripping. All tests use mocks (no real network calls).

Sequence Diagram

```mermaid
sequenceDiagram
    participant User as litellm.completion()
    participant Logging as litellm_logging.py
    participant Utils as get_model_info()
    participant Helper as _get_model_info_helper()
    participant Ollama as OllamaConfig.get_model_info()
    participant Server as Ollama Server

    User->>Logging: api_base in litellm_params
    Logging->>Utils: get_model_info(model, provider, api_base)
    Utils->>Helper: _get_model_info_helper(model, provider, api_base)
    Helper->>Ollama: get_model_info(model, api_base)

    alt api_base provided
        Ollama->>Server: POST api_base/api/show
    else env var OLLAMA_API_BASE set
        Ollama->>Server: POST env_var/api/show
    else default
        Ollama->>Server: POST localhost:11434/api/show
    end

    alt Success
        Server-->>Ollama: model metadata
        Ollama-->>Helper: ModelInfoBase with actual values
    else Connection or HTTP Error
        Server--xOllama: Error
        Ollama-->>Helper: ModelInfoBase with safe defaults
    end

    Helper-->>Utils: ModelInfoBase
    Utils-->>Logging: ModelInfo
```

Last reviewed commit: 1067705

greptile-apps bot (Contributor) left a comment


4 files reviewed, 1 comment


fix(ollama): thread api_base through get_model_info and add graceful fallback

When users pass api_base to litellm.completion() for Ollama, the model
info fetch (context window, function_calling support) was ignoring the
user's api_base and only reading OLLAMA_API_BASE env var or defaulting
to localhost:11434. This caused confusing errors in logs when Ollama
runs on a remote server.

Thread api_base from litellm_params through the get_model_info call
chain so OllamaConfig.get_model_info() uses the correct server. Also
return safe defaults instead of raising when the server is unreachable.

Fixes BerriAI#21967
Chesars force-pushed the fix/ollama-api-base-model-info branch from 1067705 to b326c5c on February 24, 2026 02:46
krrishdholakia changed the base branch from main to litellm_oss_staging_02_23_2026 on February 24, 2026 05:00
krrishdholakia merged commit 9495f4e into BerriAI:litellm_oss_staging_02_23_2026 on Feb 24, 2026
30 checks passed
Chesars deleted the fix/ollama-api-base-model-info branch on February 24, 2026 10:12
damhau pushed a commit to damhau/litellm that referenced this pull request Feb 26, 2026
fix(ollama): thread api_base to get_model_info + graceful fallback (BerriAI#21970)

* auth_with_role_name add region_name arg for cross-account sts

* update tests to include case with aws_region_name for _auth_with_aws_role

* Only pass region_name to STS client when aws_region_name is set

* Add optional aws_sts_endpoint to _auth_with_aws_role

* Parametrize ambient-credentials test for no opts, region_name, and aws_sts_endpoint

* consistently passing region and endpoint args into explicit credentials irsa

* fix env var leakage

* fix: bedrock openai-compatible imported-model should also have model arn encoded

* feat: show proxy url in ModelHub (BerriAI#21660)

* fix(bedrock): correct modelInput format for Converse API batch models (BerriAI#21656)

* fix(proxy): add model_ids param to access group endpoints for precise deployment tagging (BerriAI#21655)

POST /access_group/new and PUT /access_group/{name}/update now accept an
optional model_ids list that targets specific deployments by their unique
model_id, instead of tagging every deployment that shares a model_name.

When model_ids is provided it takes priority over model_names, giving
API callers the same single-deployment precision that the UI already has
via PATCH /model/{model_id}/update.

Backward compatible: model_names continues to work as before.

Closes BerriAI#21544

* feat(proxy): add custom favicon support (BerriAI#21653)

Add ability to configure a custom favicon for the litellm proxy UI.

- Add favicon_url field to UIThemeConfig model
- Add LITELLM_FAVICON_URL env var support
- Add /get_favicon endpoint to serve custom favicons
- Update ThemeContext to dynamically set favicon
- Add favicon URL input to UI theme settings page
- Add comprehensive tests

Closes BerriAI#8323

* fix(bedrock): prevent double UUID in create_file S3 key (BerriAI#21650)

In create_file for Bedrock, get_complete_file_url is called twice:
once in the sync handler (generating UUID-1 for api_base) and once
inside transform_create_file_request (generating UUID-2 for the
actual S3 upload). The Bedrock provider correctly writes UUID-2 into
litellm_params["upload_url"], but the sync handler unconditionally
overwrites it with api_base (UUID-1). This causes the returned
file_id to point to a non-existent S3 key.

Fix: only set upload_url to api_base when transform_create_file_request
has not already set it, preserving the Bedrock provider's value.

Closes BerriAI#21546

* feat(semantic-cache): support configurable vector dimensions for Qdrant (BerriAI#21649)

Add vector_size parameter to QdrantSemanticCache and expose it through
the Cache facade as qdrant_semantic_cache_vector_size. This allows users
to use embedding models with dimensions other than the default 1536,
enabling cheaper/stronger models like Stella (1024d), bge-en-icl (4096d),
voyage, cohere, etc.

The parameter defaults to QDRANT_VECTOR_SIZE (env var or 1536) for
backward compatibility. When creating new collections, the configured
vector_size is used instead of the hardcoded constant.

Closes BerriAI#9377

* fix(utils): normalize camelCase thinking param keys to snake_case (BerriAI#21762)

Clients like OpenCode's @ai-sdk/openai-compatible send budgetTokens
(camelCase) instead of budget_tokens in the thinking parameter, causing
validation errors. Add early normalization in completion().

* feat: add optional digest mode for Slack alert types (BerriAI#21683)

Adds per-alert-type digest mode that aggregates duplicate alerts
within a configurable time window and emits a single summary message
with count, start/end timestamps.

Configuration via general_settings.alert_type_config:
  alert_type_config:
    llm_requests_hanging:
      digest: true
      digest_interval: 86400

Digest key: (alert_type, request_model, api_base)
Default interval: 24 hours
Window type: fixed interval

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat: add blog_posts.json and local backup

* feat: add GetBlogPosts utility with GitHub fetch and local fallback

Adds GetBlogPosts class that fetches blog posts from GitHub with a 1-hour
in-process TTL cache, validates the response, and falls back to the bundled
blog_posts_backup.json on any network or validation failure.

* test: add cache reset fixture and LITELLM_LOCAL_BLOG_POSTS test

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add GET /public/litellm_blog_posts endpoint

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: log fallback warning in blog posts endpoint and tighten test

* feat: add disable_show_blog to UISettings

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add useUISettings and useDisableShowBlog hooks

* fix: rename useUISettings to useUISettingsFlags to avoid naming collision

* fix: use existing useUISettings hook in useDisableShowBlog to avoid cache duplication

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown component with react-query and error/retry state

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* fix: enforce 5-post limit in BlogDropdown and add cap test

* fix: add retry, stable post key, enabled guard in BlogDropdown

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>

* feat: add BlogDropdown to navbar after Docs link

* feat: add network_mock transport for benchmarking proxy overhead without real API calls

Intercepts at httpx transport layer so the full proxy path (auth, routing,
OpenAI SDK, response transformation) is exercised with zero-latency responses.
Activated via `litellm_settings: { network_mock: true }` in proxy config.

* Litellm dev 02 19 2026 p2 (BerriAI#21871)

* feat(ui/): new guardrails monitor demo

mock representation of what guardrails monitor looks like

* fix: ui updates

* style(ui/): fix styling

* feat: enable running ai monitor on individual guardrails

* feat: add backend logic for guardrail monitoring

* fix(guardrails/usage_endpoints.py): fix usage dashboard

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo (BerriAI#21754)

* fix(budget): fix timezone config lookup and replace hardcoded timezone map with ZoneInfo

* fix(budget): update stale docstring on get_budget_reset_time
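
Replacing a hardcoded offset map with the standard library looks roughly like this (illustrative; timezone name is a placeholder):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Resolve the configured timezone name directly instead of a
# hand-maintained {"US/Eastern": -5, ...} style mapping.
reset_time = datetime.now(ZoneInfo("Asia/Kolkata"))
```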

* fix: add missing return type annotations to iterator protocol methods in streaming_handler (BerriAI#21750)

* fix: add return type annotations to iterator protocol methods in streaming_handler

Add missing return type annotations to __iter__, __aiter__, __next__, and __anext__ methods in CustomStreamWrapper and related classes.

- __iter__(self) -> Iterator["ModelResponseStream"]
- __aiter__(self) -> AsyncIterator["ModelResponseStream"]
- __next__(self) -> "ModelResponseStream"
- __anext__(self) -> "ModelResponseStream"

Also adds AsyncIterator and Iterator to typing imports.

Fixes issue with PLR0915 noqa comments and ensures proper type checking support.
Related to: BerriAI#8304

* fix: add ruff PLR0915 noqa for files with too many statements

* Add gollem Go agent framework cookbook example (BerriAI#21747)

Show how to use gollem, a production Go agent framework, with
LiteLLM proxy for multi-provider LLM access including tool use
and streaming.

* fix: avoid mutating caller-owned dicts in SpendUpdateQueue aggregation (BerriAI#21742)

* fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)

* server root path regression doc

* fixing syntax

* fix: replace Zapier webhook with Google Form for survey submission (BerriAI#21621)

* Replace Zapier webhook with Google Form for survey submission

* Add back error logging for survey submission debugging

---------

Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "Merge pull request BerriAI#21140 from BerriAI/litellm_perf_user_api_key_auth"

This reverts commit 0e1db3f, reversing
changes made to 7e2d6f2.

* test_vertex_ai_gemini_2_5_pro_streaming

* UI new build

* fix rendering

* ui new build

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* docs fix

* release note docs

* docs

* adding image

* fix(vertex_ai): enable context-1m-2025-08-07 beta header

The `context-1m-2025-08-07` Anthropic beta header was set to `null` for vertex_ai,
causing it to be filtered out when users set `extra_headers: {anthropic-beta: context-1m-2025-08-07}`.

This prevented using Claude's 1M context window feature via Vertex AI, resulting in
`prompt is too long: 460500 tokens > 200000 maximum` errors.

Fixes BerriAI#21861
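
The feature being toggled here is used like this (model name is a placeholder):

```python
import litellm

response = litellm.completion(
    model="vertex_ai/claude-sonnet-4-5",  # placeholder Vertex AI Anthropic model
    messages=[{"role": "user", "content": "..."}],
    # Header value comes from the commit above; previously filtered out
    # for vertex_ai because it was mapped to null.
    extra_headers={"anthropic-beta": "context-1m-2025-08-07"},
)
```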

---------

Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>

* Revert "fix(vertex_ai): enable context-1m-2025-08-07 beta header (BerriAI#21870)" (BerriAI#21876)

This reverts commit bce078a.

* docs(ui): add pre-PR checklist to UI contributing guide

Add testing and build verification steps per maintainer feedback
from @yjiang-litellm. Contributors should run their related tests
per-file and ensure npm run build passes before opening PRs.

* Fix entries with fast and us/

* Add tests for fast and us

* Add support for Priority PayGo for vertex ai and gemini

* Add model pricing

* fix: ensure arrival_time is set before calculating queue time

* Fix: Anthropic model wildcard access issue

* Add incident report

* Add ability to see which model cost map is getting used

* Fix name of title

* Readd tpm limit

* State management fixes for CheckBatchCost

* Fix PR review comments

* State management fixes for CheckBatchCost - Address greptile comments

* fix mypy issues:

* Add Noma guardrails v2 based on custom guardrails (BerriAI#21400)

* Fix code qa issues

* Fix mypy issues

* Fix mypy issues

* Fix test_aaamodel_prices_and_context_window_json_is_valid

* fix: update calendly on repo

* fix(tests): use counter-based mock for time.time in prisma self-heal test

The test used a fixed side_effect list for time.time(), but the number
of calls varies by Python version, causing StopIteration on 3.12 and
AssertionError on 3.14. Replace with an infinite counter-based callable
and assert the timestamp was updated rather than checking for an exact
value.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
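
The counter-based replacement looks roughly like this (the function under test is a hypothetical placeholder):

```python
import itertools
from unittest.mock import patch

# An infinite, monotonically increasing clock: never raises StopIteration
# no matter how many times the code under test calls time.time().
fake_clock = itertools.count(start=1_000_000)
with patch("time.time", side_effect=lambda: next(fake_clock)):
    run_code_under_test()  # hypothetical placeholder
```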

* fix(tests): use absolute path for model_prices JSON in validation test

The test used a relative path 'litellm/model_prices_and_context_window.json'
which only works when pytest runs from a specific working directory.
Use os.path based on __file__ to resolve the path reliably.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
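
The fix resolves the JSON relative to the test file rather than the working directory, along these lines:

```python
import os

# __file__-relative path works regardless of pytest's cwd.
# The ".." hop count is illustrative; it depends on the test's location.
JSON_PATH = os.path.join(
    os.path.dirname(__file__), "..", "litellm", "model_prices_and_context_window.json"
)
```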

* Update tests/test_litellm/test_utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(tests): use os.path instead of Path to avoid NameError

Path is not imported at module level. Use os.path.join which is already
available.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* clean up mock transport: remove streaming, add defensive parsing

* docs: add Google GenAI SDK tutorial (JS & Python) (BerriAI#21885)

* docs: add Google GenAI SDK tutorial for JS and Python

Add tutorial for using Google's official GenAI SDK (@google/genai for JS,
google-genai for Python) with LiteLLM proxy. Covers pass-through and
native router endpoints, streaming, multi-turn chat, and multi-provider
routing via model_group_alias. Also updates pass-through docs to use the
new SDK replacing the deprecated @google/generative-ai.

* fix(docs): correct Python SDK env var name in GenAI tutorial

GOOGLE_GENAI_API_KEY does not exist in the google-genai SDK.
The correct env var is GEMINI_API_KEY (or GOOGLE_API_KEY).
Also note that the Python SDK has no base URL env var.

* fix(docs): replace non-existent GOOGLE_GENAI_BASE_URL env var in interactions.md

The Python google-genai SDK does not read GOOGLE_GENAI_BASE_URL.
Use http_options={"base_url": "..."} in code instead.

* docs: add network mock benchmarking section

* docs: tweak benchmarks wording

* fix: add auth headers and empty latencies guard to benchmark script

* refactor: use method-level import for MockOpenAITransport

* fix: guard print_aggregate against empty latencies

* fix: add INCOMPLETE status to Interactions API enum and test

Google added INCOMPLETE to the Interactions API OpenAPI spec status enum.
Update both the Status3 enum in the SDK types and the test's expected
values to match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* Guardrail Monitor - measure guardrail reliability in prod  (BerriAI#21944)

* fix: fix log viewer for guardrail monitoring

* feat(ui/): fix rendering logs per guardrail

* fix: fix viewing logs on overview tab of guardrail

* fix: log viewer

* fix: fix naming to align with metric

* docs: add performance & reliability section to v1.81.14 release notes

* fix(tests): make RPM limit test sequential to avoid race condition

Concurrent requests via run_in_executor + asyncio.gather caused a race
condition where more requests slipped through the rate limiter than
expected, leading to flaky test failures (e.g. 3 successes instead of 2
with rpm_limit=2).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* feat: Singapore guardrail policies (PDPA + MAS AI Risk Management) (BerriAI#21948)

* feat: Singapore PDPA PII protection guardrail policy template

Add Singapore Personal Data Protection Act (PDPA) guardrail support:

Regex patterns (patterns.json):
- sg_nric: NRIC/FIN detection ([STFGM] + 7 digits + checksum letter)
- sg_phone: Singapore phone numbers (+65/0065/65 prefix)
- sg_postal_code: 6-digit postal codes (contextual)
- passport_singapore: Passport numbers (E/K + 7 digits, contextual)
- sg_uen: Unique Entity Numbers (3 formats)
- sg_bank_account: Bank account numbers (dash format, contextual)

YAML policy templates (5 sub-guardrails):
- sg_pdpa_personal_identifiers: s.13 Consent
- sg_pdpa_sensitive_data: Advisory Guidelines
- sg_pdpa_do_not_call: Part IX DNC Registry
- sg_pdpa_data_transfer: s.26 overseas transfers
- sg_pdpa_profiling_automated_decisions: Model AI Governance Framework

Policy template entry in policy_templates.json with 9 guardrail definitions
(4 regex-based + 5 YAML conditional keyword matching).

Tests:
- test_sg_patterns.py: regex pattern unit tests
- test_sg_pdpa_guardrails.py: conditional keyword matching tests (100+ cases)

* feat: MAS AI Risk Management Guidelines guardrail policy template

Add Monetary Authority of Singapore (MAS) AI Risk Management Guidelines
guardrail support for financial institutions:

YAML policy templates (5 sub-guardrails):
- sg_mas_fairness_bias: Blocks discriminatory financial AI (credit/loans/insurance by protected attributes)
- sg_mas_transparency_explainability: Blocks opaque/unexplainable AI for consequential financial decisions
- sg_mas_human_oversight: Blocks fully automated financial decisions without human-in-the-loop
- sg_mas_data_governance: Blocks unauthorized sharing/mishandling of financial customer data
- sg_mas_model_security: Blocks adversarial attacks, model poisoning, inversion on financial AI

Policy template entry in policy_templates.json with 5 guardrail definitions.
Aligned with MAS FEAT Principles, Project MindForge, and NIST AI RMF.

Tests:
- test_sg_mas_ai_guardrails.py: conditional keyword matching tests (100+ cases)

* fix: address SG pattern review feedback

- Update NRIC lowercase test for IGNORECASE runtime behavior
- Add keyword context guard to sg_uen pattern to reduce false positives

* docs: clarify MAS AIRM timeline references

- Explicitly mark MAS AIRM as Nov 2025 consultation draft
- Add 2018 qualifier for FEAT principles in MAS policy descriptions
- Update MAS guardrail wording to avoid release-year ambiguity

* chore: commit resolved MAS policy conflicts

* test:

* chore:

* Add OpenAI Agents SDK tutorial with LiteLLM Proxy to docs  (BerriAI#21221)

* Add OpenAI Agents SDK tutorial to docs

* Update OpenAI Agents SDK tutorial to use LiteLLM environment variables

* Enhance OpenAI Agents SDK tutorial with built-in LiteLLM extension details and updated configuration steps. Adjust section headings for clarity and improve the flow of information regarding model setup and usage.

* adjust blog posts to fetch from github first

* feat(videos): add variant parameter to video content download (BerriAI#21955)

openai videos models support the features to download variants.
See more details here: https://developers.openai.com/api/docs/guides/video-generation#use-image-references.
Plumb variant (e.g. "thumbnail", "spritesheet") through the full
video content download chain: avideo_content → video_content →
video_content_handler → transform_video_content_request. OpenAI
appends ?variant=<value> to the GET URL; other providers accept
the parameter in their signature but ignore it.

* fixing path

* adjust blog post path

* Revert duplicate issue checker to text-based matching, remove duplicate PR workflow

Remove the Claude Code-powered duplicate PR detection workflow and revert
the duplicate issue checker back to wow-actions/potential-duplicates with
text similarity matching.

* ui changes

* adding tests

* adjust default aggregation threshold

* fix(videos): pass api_key from litellm_params to video remix handlers (BerriAI#21965)

video_remix_handler and async_video_remix_handler were not falling back
to litellm_params.api_key when the api_key parameter was None, causing
Authorization: Bearer None to be sent to the provider. This matches the
pattern already used by async_video_generation_handler.

* adding testing coverage + fixing flaky tests

* fix(ollama): thread api_base through get_model_info and add graceful fallback

When users pass api_base to litellm.completion() for Ollama, the model
info fetch (context window, function_calling support) was ignoring the
user's api_base and only reading OLLAMA_API_BASE env var or defaulting
to localhost:11434. This caused confusing errors in logs when Ollama
runs on a remote server.

Thread api_base from litellm_params through the get_model_info call
chain so OllamaConfig.get_model_info() uses the correct server. Also
return safe defaults instead of raising when the server is unreachable.

Fixes BerriAI#21967

---------

Co-authored-by: An Tang <ta@stripe.com>
Co-authored-by: janfrederickk <75388864+janfrederickk@users.noreply.github.com>
Co-authored-by: Zhenting Huang <3061613175@qq.com>
Co-authored-by: Darien Kindlund <darien@kindlund.com>
Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: yuneng-jiang <yuneng.jiang@gmail.com>
Co-authored-by: Ryan Crabbe <rcrabbe@berkeley.edu>
Co-authored-by: Krish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: LeeJuOh <56071126+LeeJuOh@users.noreply.github.com>
Co-authored-by: Monesh Ram <31161039+WhoisMonesh@users.noreply.github.com>
Co-authored-by: Trevor Prater <trevor.prater@gmail.com>
Co-authored-by: The Mavik <179817126+themavik@users.noreply.github.com>
Co-authored-by: Edwin Isac <33712823+edwiniac@users.noreply.github.com>
Co-authored-by: milan-berri <milan@berri.ai>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: TomAlon <tom@noma.security>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: ryan-crabbe <128659760+ryan-crabbe@users.noreply.github.com>
Co-authored-by: Ron Zhong <ron-zhong@hotmail.com>
Co-authored-by: Arindam Majumder <109217591+Arindam200@users.noreply.github.com>
Co-authored-by: Lei Nie <lenie@quora.com>