
Bug: Unrecognized FinishReason enum value: 12 from Gemini API causes crash in ChatGoogleGenerativeAI #33444

@lamachine


Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

Running this example requires a Google API key.

```python
import asyncio
from typing import Literal

from dotenv import load_dotenv
from pydantic import BaseModel, Field
from langchain_core.prompts import ChatPromptTemplate
from langchain_google_genai import ChatGoogleGenerativeAI

# 1. Load the Google API key.
# Assumes a .env file with GOOGLE_API_KEY="your-key-here"
load_dotenv()


# 2. Define the Pydantic schema for structured output.
class RouteDecision(BaseModel):
    """Defines the structured output for the classifier."""

    intent: Literal["action", "conversational"] = Field(
        description="Classifies the user's query."
    )
    reasoning: str = Field(description="A brief reasoning for the decision.")


# 3. Define the prompt and the LLM chain.
prompt = ChatPromptTemplate.from_messages(
    [
        ("system", "You are an expert at routing a user's query. You must use the RouteDecision tool to classify the query."),
        ("human", "{user_query}"),
    ]
)

llm = ChatGoogleGenerativeAI(model="gemini-1.5-flash", temperature=0)

# This is the call that fails due to the unhandled API response.
structured_llm = llm.with_structured_output(RouteDecision)

chain = prompt | structured_llm


# 4. Define the async function that runs the chain.
async def main():
    try:
        # This query has been observed to sometimes trigger the error.
        query = "What is the weather in San Francisco?"
        print(f"Invoking chain with query: '{query}'")
        result = await chain.ainvoke({"user_query": query})
        print("Success! Result:")
        print(result)
    except Exception as e:
        print("\n--- ERROR ---")
        print(f"An error occurred: {type(e).__name__}")
        print(f"Error Details: {e}")
        print("This error is triggered when the Gemini API returns a FinishReason enum value of 12, which is not handled by the library.")


# 5. Run the example.
if __name__ == "__main__":
    asyncio.run(main())
```

Error Message and Stack Trace (if applicable)

INFO: Uvicorn running on http://0.0.0.0:8003 (Press CTRL+C to quit)
---CLASSIFYING INTENT---
E0000 00:00:1760197830.534464 35032 alts_credentials.cc:93] ALTS creds ignored. Not running on GCP and untrusted ALTS is not enabled.
I:\GitHub\lamachine\Agent_Swarm_with_Personalities.venv\Lib\site-packages\proto\marshal\rules\enums.py:37: UserWarning: Unrecognized FinishReason enum value: 12
warnings.warn(
Error during classification: 'int' object has no attribute 'name'
---ROUTING DECISION---
--- Node: classifier ---
{'route_decision': RouteDecision(intent='conversational', reasoning="Classification failed: 'int' object has no attribute 'name'")}
---REVIEWING STEP---
---CHECKING CRITIQUE---
--- Node: reviewer ---

Description

Describe the bug
When using ChatGoogleGenerativeAI.with_structured_output(), if the Gemini API returns a response with a FinishReason enum value of 12, the library fails to parse it and throws an AttributeError: 'int' object has no attribute 'name'. This appears to be a new, unhandled reason code from the Gemini API.

To Reproduce

  1. Use langchain-google-genai version 0.1.0 or newer.
  2. Create a ChatGoogleGenerativeAI model instance.
  3. Use .with_structured_output() with any Pydantic model.
  4. Invoke the model with a prompt that causes the Gemini API to return this specific finish reason (e.g., a query that might be close to a safety boundary but not a hard violation). The exact prompt to trigger this is inconsistent.

Expected behavior
The library should either handle the new FinishReason gracefully or provide a more informative error message instead of crashing with a low-level AttributeError.
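
One possible defensive pattern (an assumption on my part, not the library's existing API) would be to resolve the reason's name without assuming the value is a known enum member, so an unrecognized int degrades to a readable label instead of crashing:

```python
def finish_reason_name(value) -> str:
    # Use the enum member's name when available; otherwise label the raw int.
    name = getattr(value, "name", None)
    if name is not None:
        return name
    return f"UNRECOGNIZED({value})"


print(finish_reason_name(12))  # "UNRECOGNIZED(12)" instead of AttributeError
```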

System Info

  • OS: Windows 11
  • Python 3.13.0 (tags/v3.13.0:60403a5, Oct 7 2024, 09:38:07) [MSC v.1941 64 bit (AMD64)] on win32
  • langchain-core==0.3.79
  • langchain-google-genai==2.0.10
