
Using previous_response_id fails when swapping from reasoning -> non-reasoning models #2364

Closed as not planned
@hayescode

Description


Confirm this is an issue with the Python library and not an underlying OpenAI API

  • This is an issue with the Python library

Describe the bug

previous_response_id works great; however, I encounter the error below when swapping from a reasoning model to a non-reasoning model, which is presumably a common use case.

I can't find a way to list and clean out the reasoning steps in this scenario, because reasoning items aren't returned by client.responses.input_items.list(). The only workaround I can think of is to manage the conversation history manually, as in Chat Completions, but that removes all of the benefits of previous_response_id.

Ideally the backend would be smart enough to handle this and drop reasoning inputs when a non-reasoning model is selected. Alternatively, a client-side function to clear them out would help in the meantime.
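As an interim workaround, the reasoning items could be stripped client-side before re-sending the history manually (giving up the previous_response_id convenience). A minimal sketch, assuming conversation items are plain dicts carrying a "type" field as in the Responses API; strip_reasoning_items is a hypothetical helper, not part of the SDK:

```python
def strip_reasoning_items(items):
    """Return the conversation items with reasoning steps removed.

    Reasoning items have type == "reasoning", and per the 400 error
    only reasoning or computer-use models accept them as input.
    """
    return [item for item in items if item.get("type") != "reasoning"]


# Example: mixed history produced by a reasoning-model turn
history = [
    {"type": "message", "role": "user",
     "content": "what is a recursive python function?"},
    {"type": "reasoning",
     "summary": [{"type": "summary_text", "text": "(summary omitted)"}]},
    {"type": "message", "role": "assistant",
     "content": "A recursive function is one that calls itself."},
]
cleaned = strip_reasoning_items(history)
# cleaned keeps only the two message items
```

The cleaned list could then be passed as input= to a non-reasoning model instead of relying on previous_response_id.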

I'm using Azure OpenAI.

OpenAI Version: 1.79.0
Azure API Version: 2025-04-01-preview

To Reproduce

response1 = await llm.responses.create(
    input="what is a recursive python function?",
    instructions="formatting re-enabled",
    model="o4-mini",
    reasoning={"effort": "medium", "summary": "detailed"},
)
print(response1)
response2 = await llm.responses.create(
    input="hi",
    previous_response_id=response1.id,
    model="gpt-4.1",
)
print(response2)
---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[44], line 8
      1 response1 = await llm.responses.create(
      2     input="what is a recursive python function?",
      3     instructions="formatting re-enabled",
      4     model="o4-mini",
      5     reasoning={"effort": "medium", "summary": "detailed"},
      6 )
      7 print(response1)
----> 8 response2 = await llm.responses.create(
      9     input="hi",
     10     previous_response_id=response1.id,
     11     model="gpt-4.1",
     12 )
     13 print(response2)

File c:\Users\user\repo\.venv\Lib\site-packages\openai\resources\responses\responses.py:1559, in AsyncResponses.create(self, input, model, include, instructions, max_output_tokens, metadata, parallel_tool_calls, previous_response_id, reasoning, service_tier, store, stream, temperature, text, tool_choice, tools, top_p, truncation, user, extra_headers, extra_query, extra_body, timeout)
   1529 @required_args(["input", "model"], ["input", "model", "stream"])
   1530 async def create(
   1531     self,
   (...)   1557     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
   1558 ) -> Response | AsyncStream[ResponseStreamEvent]:
-> 1559     return await self._post(
   1560         "/responses",
...
-> 1549         raise self._make_status_error_from_response(err.response) from None
   1551     break
   1553 assert response is not None, "could not resolve response (should never happen)"

BadRequestError: Error code: 400 - {'error': {'message': 'Reasoning input items can only be provided to a reasoning or computer use model. Remove reasoning items from your input and try again.', 'type': 'invalid_request_error', 'param': 'input', 'code': None}}
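For completeness, the failure is detectable programmatically: the server returns a 400 with a distinctive message, so a caller could fall back to manual history management when it appears. A sketch of that detection, with the matched text taken from the traceback above (needs_history_scrub is a hypothetical helper):

```python
# Substring of the 400 error message shown in the traceback above
REASONING_INPUT_ERROR = "Reasoning input items can only be provided"


def needs_history_scrub(status_code, error_message):
    """True if a 400 indicates reasoning items were sent to a
    non-reasoning model, so the history must be filtered first."""
    return status_code == 400 and REASONING_INPUT_ERROR in error_message
```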

Code snippets

OS

Windows 11

Python version

Python v3.13.2

Library version

openai v1.79.0
