openai[patch]: allow specification of output format for Responses API #31686

Open · wants to merge 7 commits into base: master

Conversation

@ccurme (Collaborator) commented Jun 20, 2025

Add an output_version attribute to BaseChatOpenAI. Motivation is to allow users to opt-in to breaking changes in AIMessage formats.

"v0" is the default and corresponds to current format. We intend to introduce standard types for reasoning, citations, and other AIMessage content with "v1". At that point we will add the attribute to BaseChatModel.

Here we implement "responses/v1" to allow users to opt-in to the change described in #31587 — this is a breaking change that is necessary to support some features (e.g., remote MCP tool use under zero data retention contexts).
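As a rough sketch of the opt-in pattern (the class, method, and payload shapes below are illustrative stand-ins, not the actual langchain-openai implementation): the model gains a string field defaulting to `"v0"`, and output parsing branches on it, so existing users see no change unless they explicitly opt in.

```python
from dataclasses import dataclass


@dataclass
class SketchChatModel:
    """Hypothetical stand-in for BaseChatOpenAI; only the field name matches the PR."""

    output_version: str = "v0"

    def format_output(self, raw: dict) -> dict:
        """Branch on output_version when shaping the message content."""
        if self.output_version == "responses/v1":
            # New format: preserve the Responses API output items as-is.
            return {"content": raw["output"]}
        # "v0" (default): legacy behavior, collapse message text into one string.
        text = "".join(
            part["text"]
            for item in raw["output"]
            if item["type"] == "message"
            for part in item["content"]
            if part["type"] == "output_text"
        )
        return {"content": text}


raw = {
    "output": [
        {"type": "message", "content": [{"type": "output_text", "text": "hi"}]}
    ]
}
print(SketchChatModel().format_output(raw)["content"])  # "hi"
print(SketchChatModel(output_version="responses/v1").format_output(raw)["content"])
```

Because the default stays `"v0"`, this is backwards compatible: only users who pass `output_version="responses/v1"` see the new content shape.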

@dosubot bot added labels size:L (This PR changes 100-499 lines, ignoring generated files) and langchain (Related to the langchain package) Jun 20, 2025
@ccurme ccurme requested review from eyurtsev and sydney-runkle June 20, 2025 16:30

codspeed-hq bot commented Jun 20, 2025

CodSpeed WallTime Performance Report

Merging #31686 will not alter performance

Comparing cc/output_version (120d891) with master (7ff4050)

⚠️ Unknown Walltime execution environment detected: walltime measurements on standard hosted runners may be inconsistent.

Summary

✅ 13 untouched benchmarks

@@ -305,6 +305,20 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
- If False (default), will always use streaming case if available.
"""

output_version: str = "v0"
A collaborator commented:
Can we remove this until there's more than one version?

codspeed-hq bot commented Jun 20, 2025

The CodSpeed Instrumentation report likewise finds that merging #31686 will not alter performance (comparing cc/output_version (120d891) with master (7ff4050); ✅ 13 untouched benchmarks).

@ccurme ccurme changed the title core, openai[patch]: add output_version to BaseChatModel openai[patch]: allow specification of output format for Responses API Jun 20, 2025
@ccurme ccurme removed the langchain Related to the langchain package label Jun 20, 2025
@sydney-runkle (Collaborator) left a comment:
This makes a lot of sense - thanks for the clean docs, etc.

A few follow-up questions:

  • Can we make the default v1 when we bump version to v1?
  • I presume v1 will be using a new form of standard output?

A higher-level question: isn't it our job to support a uniform output format across all models? In that sense, should we be supporting a Responses-specific format at all? It seems quite tied to OpenAI; perhaps this just necessitates the shift towards, for lack of a better word, stdout 😉
