
logprobs not returned when using the Responses API #36211

@HOZHENWAI


Checked other resources

  • This is a bug, not a usage question.
  • I added a clear and descriptive title that summarizes this issue.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
  • This is not related to the langchain-community package.
  • I posted a self-contained, minimal, reproducible example. A maintainer can copy it and run it AS IS.

Package (Required)

  • langchain-openai

Related Issues / PRs

Reproduction Steps / Example Code (Python)

from langchain_openai import ChatOpenAI

client = ChatOpenAI(
    model="gpt-5.2",
    use_responses_api=True,
    logprobs=True
)

response = client.invoke("say hello world")

Error Message and Stack Trace (if applicable)

Traceback (most recent call last):
  File "<input>", line 12, in <module>
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 402, in invoke
    self.generate_prompt(
    ~~~~~~~~~~~~~~~~~~~~^
        [self._convert_input(input)],
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    ...<6 lines>...
        **kwargs,
        ^^^^^^^^^
    ).generations[0][0],
    ^
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1123, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
           ~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 933, in generate
    self._generate_with_cache(
    ~~~~~~~~~~~~~~~~~~~~~~~~~^
        m,
        ^^
    ...<2 lines>...
        **kwargs,
        ^^^^^^^^^
    )
    ^
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py", line 1235, in _generate_with_cache
    result = self._generate(
        messages, stop=stop, run_manager=run_manager, **kwargs
    )
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_openai\chat_models\base.py", line 1496, in _generate
    raise e
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\langchain_openai\chat_models\base.py", line 1474, in _generate
    raw_response = self.root_client.responses.with_raw_response.create(
        **payload
    )
  File "C:\Users\ml-na\PycharmProjects\pipeline-llm\.venv\Lib\site-packages\openai\_legacy_response.py", line 367, in wrapped
    return cast(LegacyAPIResponse[R], func(*args, **kwargs))
                                      ~~~~^^^^^^^^^^^^^^^^^
TypeError: Responses.create() got an unexpected keyword argument 'logprobs'

Description

The principal issue is that if you set use_responses_api=True together with logprobs=True, any invocation fails:

from langchain_openai import ChatOpenAI

client = ChatOpenAI(
    model="gpt-5.2",
    use_responses_api=True,
    logprobs=True
)

response = client.invoke("say hello world")

TypeError: Responses.create() got an unexpected keyword argument 'logprobs'

Looking at the docs, logprobs is treated as a pass-through parameter and is forwarded as-is to responses.create, which explains the error. The parameter the Responses API actually supports (via the official openai library) is include=["message.output_text.logprobs"], so the following code works:
from langchain_openai import ChatOpenAI

client = ChatOpenAI(
    model="gpt-5.2",
    use_responses_api=True,
    include=["message.output_text.logprobs"]
)

response = client.invoke("say hello world")
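A fix on the LangChain side could translate the legacy flag into the include entry before the payload reaches responses.create. A minimal sketch of that mapping, with a hypothetical helper name (not the actual langchain-openai code):

```python
def normalize_responses_payload(payload: dict) -> dict:
    """Hypothetical: translate the Chat Completions-style ``logprobs=True``
    flag into the Responses API ``include`` entry instead of forwarding it.
    """
    payload = dict(payload)  # don't mutate the caller's dict
    if payload.pop("logprobs", False):
        include = list(payload.get("include") or [])
        if "message.output_text.logprobs" not in include:
            include.append("message.output_text.logprobs")
        payload["include"] = include
    return payload
```

With this in place, the original logprobs=True call would be rewritten to the include form rather than raising a TypeError.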
However, because the Responses output format differs from the Chat Completions format, the logprobs are not parsed, so they are missing from response.response_metadata, where I expected to find them.
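For reference, extracting them from the raw response could look roughly like this. This is a sketch assuming the Responses output layout described in the OpenAI docs (a list of output items, where "message" items hold "output_text" content parts that carry a "logprobs" list when requested); the helper name is illustrative:

```python
def extract_output_text_logprobs(response: dict) -> list:
    """Collect per-token logprobs from a Responses API payload.

    Assumed shape: response["output"] is a list of items; items with
    type "message" have "content" parts; parts with type "output_text"
    carry a "logprobs" list when include=["message.output_text.logprobs"]
    was set on the request.
    """
    collected = []
    for item in response.get("output", []):
        if item.get("type") != "message":
            continue
        for part in item.get("content", []):
            if part.get("type") == "output_text":
                collected.extend(part.get("logprobs") or [])
    return collected
```

Something like this run over the raw payload could populate response_metadata["logprobs"] so the Responses path matches the Chat Completions behavior.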

System Info

System Information

OS: Windows
OS Version: 10.0.26200
Python Version: 3.14.3 (tags/v3.14.3:323c59a, Feb 3 2026, 16:04:56) [MSC v.1944 64 bit (AMD64)]

Package Information

langchain_core: 1.2.22
langchain: 1.2.13
langsmith: 0.7.22
langchain_google_genai: 4.2.1
langchain_openai: 1.1.12
langgraph_sdk: 0.3.12

Optional packages not installed

deepagents
deepagents-cli

Other Dependencies

filetype: 1.2.0
google-genai: 1.68.0
httpx: 0.28.1
jsonpatch: 1.33
langgraph: 1.1.3
openai: 2.29.0
opentelemetry-api: 1.40.0
orjson: 3.11.7
packaging: 26.0
pydantic: 2.12.5
pytest: 9.0.2
pyyaml: 6.0.3
requests: 2.32.5
requests-toolbelt: 1.0.0
rich: 14.3.3
tenacity: 9.1.4
tiktoken: 0.12.0
typing-extensions: 4.15.0
uuid-utils: 0.14.1
websockets: 16.0
wrapt: 1.17.3
xxhash: 3.6.0
zstandard: 0.25.0

Metadata

Labels

  • bug (Related to a bug, vulnerability, unexpected error with an existing feature)
  • external
  • openai (`langchain-openai` package issues & PRs)
