openai[patch]: allow specification of output format for Responses API #31686
Conversation
```diff
@@ -305,6 +305,20 @@ class BaseChatModel(BaseLanguageModel[BaseMessage], ABC):
         - If False (default), will always use streaming case if available.
     """

+    output_version: str = "v0"
```
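A minimal sketch of the attribute pattern shown in that diff. The class below is a hypothetical stand-in (it is not `BaseChatModel` or `BaseChatOpenAI`); only the `output_version` field and its `"v0"` default mirror the actual change:

```python
from dataclasses import dataclass


@dataclass
class ChatModelSketch:
    """Hypothetical stand-in for a chat model class; for illustration only."""

    model: str = "gpt-4o"
    # Version of the AIMessage output format to emit.
    # "v0" is the current default; users opt in to newer formats explicitly.
    output_version: str = "v0"


default_model = ChatModelSketch()
opted_in = ChatModelSketch(output_version="responses/v1")
print(default_model.output_version)  # v0
print(opted_in.output_version)       # responses/v1
```

Because the default stays `"v0"`, existing callers see no behavior change unless they pass the new value at construction time.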
Can we remove this until there's more than one version?
This makes a lot of sense - thanks for the clean docs, etc.

A few follow-up questions:
- Can we make the default `v1` when we bump the version to v1?
- I presume `v1` will be using a new form of standard output?

A higher-level question: isn't our job to support a uniform output format across all models? In that sense, shouldn't we not support a Responses-specific format? That seems quite tied to OpenAI... perhaps this just necessitates the shift towards, for lack of a better word, stdout 😉
Add an `output_version` attribute to BaseChatOpenAI. The motivation is to allow users to opt in to breaking changes in AIMessage formats. `"v0"` is the default and corresponds to the current format. We intend to introduce standard types for reasoning, citations, and other AIMessage content with `"v1"`. At that point we will add the attribute to BaseChatModel.

Here we implement `"responses/v1"` to allow users to opt in to the change described in #31587; this is a breaking change that is necessary to support some features (e.g., remote MCP tool use under zero data retention contexts).