Conversation


@shehab299 shehab299 commented Dec 3, 2025

  • Since reasoning effort is not provider-agnostic, how should we handle it in the interface?
  • More generally, how should we handle provider-specific features in a provider-agnostic interface?


// Note: `Response::json()` and `Response::text()` both consume the response,
// so the body has to be read once as text and then parsed from that string.
let response_text = response.text().await?;
println!("Response body: {}", response_text); // leftover debug print
let openai_resp: ChatCompletionResponse = serde_json::from_str(&response_text)?;
Collaborator
We don't need the print lines, right?

Collaborator Author
Sorry, that was a mistake.

Collaborator Author

I was debugging why reasoning models weren't outputting content, so I was printing the raw response.
I found that OpenAI doesn't include the thinking tokens in the response, but it does charge you for them.
