Accessing reasoning tokens of another LLM in the Agents SDK #462

Closed as not planned
@atahanozdemirberkeley

Description

I'm using the agents SDK with a non-OpenAI model that supports reasoning (using a ":thinking" suffix in the model name).

When using these models, I can see that the Generation output includes a count of "reasoning_tokens" in the usage stats:

"usage": {
  "input_tokens": 13545,
  "input_tokens_details": { "cached_tokens": 0 },
  "output_tokens_details": { "reasoning_tokens": 114 },
  "output_tokens": 270,
  "total_tokens": 13815
}
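
The token counts in that payload can be inspected directly. A minimal sketch, using a dict literal that mirrors the stats above rather than a live Agents SDK response:

```python
# Usage payload as reported in the Generation output above
# (a hard-coded copy for illustration, not an SDK call).
usage = {
    "input_tokens": 13545,
    "input_tokens_details": {"cached_tokens": 0},
    "output_tokens_details": {"reasoning_tokens": 114},
    "output_tokens": 270,
    "total_tokens": 13815,
}

# Reasoning tokens are counted inside output_tokens, so the
# visible (non-reasoning) portion is the difference.
reasoning_tokens = usage["output_tokens_details"]["reasoning_tokens"]
visible_tokens = usage["output_tokens"] - reasoning_tokens
print(reasoning_tokens, visible_tokens)  # 114 156
```

This shows the count is exposed, but not the reasoning text itself, which is the gap the question is about.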

However, I can't find a way to access the actual reasoning content. For OpenAI models, I understand the SDK supports a reasoning configuration in ModelSettings that exposes reasoning content, but that appears to be supported only for o-series models.

Is there any way to access the reasoning content from non-OpenAI models through the Agents SDK, or is this currently only supported for OpenAI models?
