
ChatOpenAI: Cannot use GPT-5 verbosity parameter with structured output - ValueError on text/response_format conflict #32492

@d-gangz

Description

Checked other resources

  • This is a bug, not a usage question. For questions, please use the LangChain Forum (https://forum.langchain.com/).
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • I read what a minimal reproducible example is (https://stackoverflow.com/help/minimal-reproducible-example).
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Example Code

from langchain_openai import ChatOpenAI
from langsmith import Client
from dotenv import load_dotenv
from pydantic import BaseModel, Field


# Define Pydantic model for structured output
class MovieAnalysis(BaseModel):
    """Structured response for movie story analysis"""

    response: str = Field(description="The main response analyzing the movie story")
    genre: str = Field(description="The primary genre of the movie (e.g., Fantasy, Action, Drama)")


# Load environment variables
load_dotenv()

# Initialize LangSmith client
client = Client()

# Pull the prompt with model settings included
prompt = client.pull_prompt("gpt5-test")

# Create model - this combination should work but currently fails
model = ChatOpenAI(
    model="gpt-5",
    output_version="responses/v1",
    reasoning={"effort": "minimal"},
    model_kwargs={"text": {"verbosity": "high"}},  # This causes the conflict
)

# Create structured output model
structured_model = model.with_structured_output(MovieAnalysis)

# Create chain with structured output
chain = prompt | structured_model

# This fails with ValueError
result = chain.invoke({
    "story": "A group of unlikely heroes must destroy a powerful ring by throwing it into a volcano while being pursued by evil forces"
})

Error Message and Stack Trace (if applicable)


ValueError Traceback (most recent call last)
Cell In[1], line 53
50 chain = prompt | structured_model
52 # Invoke the prompt
---> 53 result = chain.invoke(
54 {
55 "story": "A group of unlikely heroes must destroy a powerful ring by throwing it into a volcano while being pursued by evil forces"
56 }
57 )
59 print("Structured result:")
60 print(f"Response: {result.response}")

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:3049, in RunnableSequence.invoke(self, input, config, **kwargs)
3047 input_ = context.run(step.invoke, input_, config, **kwargs)
3048 else:
-> 3049 input_ = context.run(step.invoke, input_, config)
3050 # finish the root run
3051 except BaseException as e:

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py:5441, in RunnableBindingBase.invoke(self, input, config, **kwargs)
5434 @override
5435 def invoke(
5436 self,
(...) 5439 **kwargs: Optional[Any],
5440 ) -> Output:
-> 5441 return self.bound.invoke(
5442 input,
5443 self._merge_configs(config),
5444 **{**self.kwargs, **kwargs},
5445 )

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:383, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
371 @override
372 def invoke(
373 self,
(...) 378 **kwargs: Any,
379 ) -> BaseMessage:
380 config = ensure_config(config)
381 return cast(
382 "ChatGeneration",
--> 383 self.generate_prompt(
384 [self._convert_input(input)],
385 stop=stop,
386 callbacks=config.get("callbacks"),
387 tags=config.get("tags"),
388 metadata=config.get("metadata"),
389 run_name=config.get("run_name"),
390 run_id=config.pop("run_id", None),
391 **kwargs,
392 ).generations[0][0],
393 ).message

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:1006, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
997 @OverRide
998 def generate_prompt(
999 self,
(...) 1003 **kwargs: Any,
1004 ) -> LLMResult:
1005 prompt_messages = [p.to_messages() for p in prompts]
-> 1006 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:825, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
822 for i, m in enumerate(input_messages):
823 try:
824 results.append(
--> 825 self._generate_with_cache(
826 m,
827 stop=stop,
828 run_manager=run_managers[i] if run_managers else None,
829 **kwargs,
830 )
831 )
832 except BaseException as e:
833 if run_managers:

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:1072, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
1070 result = generate_from_stream(iter(chunks))
1071 elif inspect.signature(self._generate).parameters.get("run_manager"):
-> 1072 result = self._generate(
1073 messages, stop=stop, run_manager=run_manager, **kwargs
1074 )
1075 else:
1076 result = self._generate(messages, stop=stop, **kwargs)

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:1094, in BaseChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
1090 stream_iter = self._stream(
1091 messages, stop=stop, run_manager=run_manager, **kwargs
1092 )
1093 return generate_from_stream(stream_iter)
-> 1094 payload = self._get_request_payload(messages, stop=stop, **kwargs)
1095 generation_info = None
1096 if "response_format" in payload:

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:2681, in ChatOpenAI._get_request_payload(self, input, stop, **kwargs)
2674 def _get_request_payload(
2675 self,
2676 input: LanguageModelInput,
(...) 2679 **kwargs: Any,
2680 ) -> dict:
-> 2681 payload = super()._get_request_payload(input, stop=stop, **kwargs)
2682 # max_tokens was deprecated in favor of max_completion_tokens
2683 # in September 2024 release
2684 if "max_tokens" in payload:

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:1170, in BaseChatOpenAI._get_request_payload(self, input, stop, **kwargs)
1168 payload = _construct_responses_api_payload(payload_to_use, payload)
1169 else:
-> 1170 payload = _construct_responses_api_payload(messages, payload)
1171 else:
1172 payload["messages"] = [_convert_message_to_dict(m) for m in messages]

File ~/git-projects/glowing-langsmith/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:3404, in _construct_responses_api_payload(messages, payload)
3402 if payload.get("text"):
3403 text = payload["text"]
-> 3404 raise ValueError(
3405 "Can specify at most one of 'response_format' or 'text', received both:"
3406 f"\n{schema=}\n{text=}"
3407 )
3409 # For pydantic + non-streaming case, we use responses.parse.
3410 # Otherwise, we use responses.create.
3411 strict = payload.pop("strict", None)

ValueError: Can specify at most one of 'response_format' or 'text', received both:
schema=<class '__main__.MovieAnalysis'>
text={'verbosity': 'high'}

Description

LangChain GPT-5 Verbosity + Structured Output Issue

What is the problem, question, or error?

When using GPT-5's new verbosity parameter via model_kwargs={"text": {"verbosity": "high"}} together with structured output (.with_structured_output()), LangChain raises a ValueError stating that the response_format and text parameters cannot be used together.

However, OpenAI's API does support both features simultaneously: in the Responses API, the structured-output format lives under text.format, alongside verbosity. This appears to be a LangChain implementation limitation: the library treats the two settings as the mutually exclusive Chat Completions parameters instead of merging them into the Responses API's single text object, which is the API surface GPT-5 is designed for.
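To illustrate the claim, here is a sketch of what such a Responses API payload could look like, with verbosity and the structured-output format as sibling keys of the single text object. The exact field names ("json_schema", "name", "schema") are assumptions based on OpenAI's Responses API structured-output documentation, not verified here:

```python
# Sketch: in the Responses API shape, verbosity and structured output both
# nest under `text`, so there is no inherent conflict between them.
# Field names inside "format" are assumptions from OpenAI's docs.
movie_schema = {
    "type": "object",
    "properties": {
        "response": {"type": "string"},
        "genre": {"type": "string"},
    },
    "required": ["response", "genre"],
    "additionalProperties": False,
}

payload = {
    "model": "gpt-5",
    "input": "Analyze this movie story...",
    "text": {
        "verbosity": "high",  # GPT-5 verbosity control
        "format": {           # structured output lives here, not in response_format
            "type": "json_schema",
            "name": "MovieAnalysis",
            "schema": movie_schema,
        },
    },
}
```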

What I'm trying to do:

  • Use GPT-5's verbosity control ("text": {"verbosity": "high"})
  • Combined with structured output using Pydantic models
  • Both features should work together as they do in the raw OpenAI client

Current behavior:

LangChain throws ValueError: Can specify at most one of 'response_format' or 'text' because it's trying to use incompatible parameter combinations from the Chat Completions API.

Expected behavior:

Should work without error, similar to how the raw OpenAI client handles this via the Responses API structure.

System Info

System Information

OS: Darwin
OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
Python Version: 3.11.6 (main, Nov 2 2023, 04:39:43) [Clang 14.0.3 (clang-1403.0.22.14.1)]

Package Information

langchain_core: 0.3.74
langsmith: 0.4.13
langchain_google_genai: 2.1.1
langchain_openai: 0.3.28

Optional packages not installed

langserve

Other Dependencies

filetype: 1.2.0
google-ai-generativelanguage: 0.6.17
httpx<1,>=0.23.0: Installed. No version info available.
jsonpatch<2.0,>=1.33: Installed. No version info available.
langchain-core<1.0.0,>=0.3.68: Installed. No version info available.
langsmith-pyo3>=0.1.0rc2;: Installed. No version info available.
langsmith>=0.3.45: Installed. No version info available.
openai-agents>=0.0.3;: Installed. No version info available.
openai<2.0.0,>=1.86.0: Installed. No version info available.
opentelemetry-api>=1.30.0;: Installed. No version info available.
opentelemetry-exporter-otlp-proto-http>=1.30.0;: Installed. No version info available.
opentelemetry-sdk>=1.30.0;: Installed. No version info available.
orjson>=3.9.14;: Installed. No version info available.
packaging>=23.2: Installed. No version info available.
pydantic: 2.11.7
pydantic<3,>=1: Installed. No version info available.
pydantic>=2.7.4: Installed. No version info available.
pytest>=7.0.0;: Installed. No version info available.
PyYAML>=5.3: Installed. No version info available.
requests-toolbelt>=1.0.0: Installed. No version info available.
requests>=2.0.0: Installed. No version info available.
rich>=13.9.4;: Installed. No version info available.
tenacity!=8.4.0,<10.0.0,>=8.1.0: Installed. No version info available.
tiktoken<1,>=0.7: Installed. No version info available.
typing-extensions>=4.7: Installed. No version info available.
vcrpy>=7.0.0;: Installed. No version info available.
zstandard>=0.23.0: Installed. No version info available.

Labels

    bug