[Bug]: Gemini TTS model returning 400 with 'audio' parameters #11250

Closed
@AyrennC

Description

What happened?

Using gemini-2.5-flash-preview-tts or gemini-2.5-pro-preview-tts through the LiteLLM proxy always returns a 400 when the audio parameter is provided.

Here is the request body sent to /v1/chat/completions:

{
  "model": "gemini/gemini-2.5-flash-preview-tts",
  "messages": [
    {
      "role": "user",
      "content": "You find yourself stranded on an island."
    }
  ],
  "stream": true,
  "temperature": "0.7",
  "frequency_penalty": "0",
  "modalities": [
    "audio"
  ],
  "audio": {
    "voice": "Zephyr",
    "format": "wav"
  }
}

Relevant log output

Failed to generate narration: 400 litellm.UnsupportedParamsError: gemini does not support parameters: ['audio'], for model=gemini-2.5-flash-preview-tts. To drop these, set `litellm.drop_params=True` or for proxy:

`litellm_settings:
  drop_params: true`

If you want to use these params dynamically send allowed_openai_params=['audio'] in your request.
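For reference, a minimal sketch of the dynamic workaround the error message suggests: building the /v1/chat/completions payload with `allowed_openai_params` so the proxy forwards the otherwise-rejected `audio` parameter. The endpoint and API key are omitted here; whether this actually makes the TTS models accept the parameter is exactly what this issue is about.

```python
import json

# Payload mirroring the failing request above, plus the
# allowed_openai_params passthrough suggested by the error message.
payload = {
    "model": "gemini/gemini-2.5-flash-preview-tts",
    "messages": [
        {"role": "user", "content": "You find yourself stranded on an island."}
    ],
    "stream": True,
    "modalities": ["audio"],
    "audio": {"voice": "Zephyr", "format": "wav"},
    # Asks LiteLLM to forward 'audio' instead of raising UnsupportedParamsError.
    "allowed_openai_params": ["audio"],
}

body = json.dumps(payload)
```

Send `body` with any HTTP client (e.g. `requests.post`) against the proxy's /v1/chat/completions endpoint, with your usual authorization header.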

Are you a ML Ops Team?

No

What LiteLLM version are you on ?

v1.71.1

Twitter / LinkedIn details

No response

Labels: bug (Something isn't working)