
[Bug]: 'invalid beta flag' for Claude models on VertexAI #18145

@steve-gore-snapdocs

Description


What happened?

With Roo Code configured to use the LiteLLM provider, with claude-sonnet-4-5 served through VertexAI and prompt caching enabled, we get the following error:

litellm.BadRequestError: Vertex_aiException BadRequestError - b'{"type":"error","error":{"type":"invalid_request_error","message":"invalid beta flag"},"request_id":"req_vrtx_xxxxxxx"}'. Received Model Group=claude-sonnet-4-5
Available Model Group Fallbacks=None LiteLLM Retried: 1 times, LiteLLM Max Retries: 2

With prompt caching disabled, the request returns successfully.
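For completeness, a minimal repro sketch that exercises the same request shape via the SDK rather than the proxy; the project/location values are placeholders, and the message uses the standard LiteLLM `cache_control` prompt-caching format:

```python
# Minimal repro sketch. Assumptions: vertex_project/vertex_location values are
# placeholders, and prompt caching is triggered via the standard cache_control
# content-block format supported by litellm.
import litellm

response = litellm.completion(
    model="vertex_ai/claude-sonnet-4-5",
    vertex_project="my-gcp-project",  # placeholder
    vertex_location="us-east5",       # placeholder
    messages=[
        {
            "role": "system",
            "content": [
                {
                    "type": "text",
                    # Padding so the block is large enough to be cacheable.
                    "text": "You are a helpful assistant. " * 200,
                    # This is what triggers prompt caching (and the beta header).
                    "cache_control": {"type": "ephemeral"},
                }
            ],
        },
        {"role": "user", "content": "Hello"},
    ],
)
print(response.choices[0].message.content)
```

With the `cache_control` block removed, the same call succeeds.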

I believe this regressed in https://github.com/BerriAI/litellm/pull/17142/changes#diff-3e49c88c9e4541d46e9443e79f9b34c48a9ff0e29768fbbcc417018866c41f1b. A fix was made for Bedrock (https://github.com/BerriAI/litellm/pull/17301/changes) but not for VertexAI. Additionally, some of the logic differs between the Bedrock and Vertex implementations (Vertex does not start from the provided headers, etc.).
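If the Vertex path needs the same treatment as the Bedrock fix, the shape of it would presumably be an allowlist filter applied to the outgoing anthropic-beta header before the request is sent. A hypothetical sketch of that idea (the helper name and the allowlist contents are illustrative, not LiteLLM's actual code):

```python
# Hypothetical sketch of a Bedrock-style fix applied to Vertex: drop any beta
# flags the Vertex Anthropic endpoint rejects, removing the header entirely if
# nothing supported remains. The allowlist below is an assumption; the real set
# would need to be confirmed against Vertex's documentation.
VERTEX_ANTHROPIC_SUPPORTED_BETAS = {
    "context-1m-2025-08-07",  # example entry; assumption
}

def filter_anthropic_beta_header(headers: dict) -> dict:
    """Keep only Vertex-supported flags in the 'anthropic-beta' header."""
    beta = headers.get("anthropic-beta")
    if not beta:
        return headers
    kept = [
        flag.strip()
        for flag in beta.split(",")
        if flag.strip() in VERTEX_ANTHROPIC_SUPPORTED_BETAS
    ]
    filtered = dict(headers)
    if kept:
        filtered["anthropic-beta"] = ",".join(kept)
    else:
        del filtered["anthropic-beta"]  # no valid flags left; omit the header
    return filtered
```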



What part of LiteLLM is this about?

Proxy

What LiteLLM version are you on?

v1.80.8

