Function calling fails on “application/json” MIME type with the latest Gemini models #656

Open
@ekiztk

Description

Describe the bug

This was working a few days ago. When I call the backend planner agent via Runner.run(), the SDK now fails with:

  • "Function calling with a response mime type: 'application/json' is unsupported"

The error occurs even though the agent's output_type is an ordinary JSON-serializable Pydantic model (BackendPlannerAgentOutput).
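
I have not confirmed exactly what request the SDK builds from output_type, but the rejection can presumably be checked against the Gemini OpenAI-compatibility endpoint directly by combining a tool definition with a JSON response format using the plain openai client. A minimal sketch (the noop tool and prompt are made up):

    import asyncio
    from openai import AsyncOpenAI

    async def check_endpoint():
        client = AsyncOpenAI(
            base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
            api_key="YOUR_API_KEY",
        )
        # Send a dummy tool definition together with a JSON response format,
        # which is the combination the error message complains about.
        resp = await client.chat.completions.create(
            model="gemini-2.5-pro-preview-05-06",
            messages=[{"role": "user", "content": "Reply with a small JSON object."}],
            tools=[{
                "type": "function",
                "function": {
                    "name": "noop",
                    "description": "Placeholder tool for the check.",
                    "parameters": {"type": "object", "properties": {}},
                },
            }],
            response_format={"type": "json_object"},
        )
        print(resp.choices[0].message)

    if __name__ == "__main__":
        asyncio.run(check_endpoint())

If this direct call fails with the same message, the restriction sits on the Gemini side rather than in the Agents SDK.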

Debug information

  • Agents SDK version: 0.0.14
  • Python version: 3.12.8

Repro steps

  1. Install the Agents SDK (pip install openai-agents) and its dependencies.

  2. Create a file planner_test.py with the following minimal script:

    import json
    import asyncio
    from enum import Enum
    from typing import List
    from openai import AsyncOpenAI
    from pydantic import BaseModel
    from agents import Agent, Runner, OpenAIChatCompletionsModel
    
    class FileActionEnum(str, Enum):
        CREATE = "create"
        UPDATE = "update"
        DELETE = "delete"
    
    class PIPManagerActionEnum(str, Enum):
        INSTALL = "install"
        UNINSTALL = "uninstall"
    
    class BackendFileTask(BaseModel):
        action: FileActionEnum
        file_path: str
        dependencies: List[str] = []
        description: str
    
    class BackendPackageTask(BaseModel):
        action: PIPManagerActionEnum
        package_name: str
    
    class BackendPlannerAgentOutput(BaseModel):
        file_tasks: List[BackendFileTask]
        package_tasks: List[BackendPackageTask]
        explanation: str
    
    async def run_backend_planner_agent():
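        # Point an OpenAI client at Gemini's OpenAI-compatible endpoint.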
        gemini_client = AsyncOpenAI(
            base_url="https://generativelanguage.googleapis.com/v1beta/openai/",
            api_key="YOUR_API_KEY"
        )
        payload = {
            "backend_stack_content": "dummy stack",
            "user_request": "Do something",
            "backend_project_directory": "."
        }
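        # output_type asks the SDK for structured output matching this Pydantic model;
        # that appears to be what sets the 'application/json' response mime type.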
        agent = Agent(
            name="Planner",
            handoff_description="Test plan",
            instructions="Test prompt",
            model=OpenAIChatCompletionsModel(
                model="gemini-2.5-pro-preview-05-06",
                openai_client=gemini_client,
            ),
            tools=[],
            output_type=BackendPlannerAgentOutput
        )
        result = await Runner.run(agent, json.dumps(payload))
        return result.final_output
    
    if __name__ == "__main__":
        asyncio.run(run_backend_planner_agent())
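
A possible workaround, sketched below and untested, is to drop output_type so the SDK does not request a JSON response format, instruct the model in the prompt to answer as JSON, and validate the text output manually with the same Pydantic model. The snippet reuses gemini_client and payload from the script above and belongs inside run_backend_planner_agent:

    # Untested workaround sketch: no output_type, so no JSON response format is requested.
    plain_agent = Agent(
        name="Planner",
        handoff_description="Test plan",
        instructions=(
            "Test prompt. Respond only with a JSON object matching the "
            "BackendPlannerAgentOutput schema."
        ),
        model=OpenAIChatCompletionsModel(
            model="gemini-2.5-pro-preview-05-06",
            openai_client=gemini_client,
        ),
        tools=[],
    )
    result = await Runner.run(plain_agent, json.dumps(payload))
    # final_output is plain text here; validate it against the Pydantic model manually.
    plan = BackendPlannerAgentOutput.model_validate_json(result.final_output)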

Metadata

    Labels

    bug (Something isn't working)
