Description
Checks
- I have updated to the latest minor and patch version of Strands
- I have checked the documentation and this is not expected behavior
- I have searched ./issues and there are no duplicates of my issue
Strands Version
0.1.7
Python Version
3.13.0
Operating System
macOS
Installation Method
pip
Steps to Reproduce
- Create a script with the following code:

from strands import Agent
from strands.models.openai import OpenAIModel
from strands_tools import calculator

model = OpenAIModel(
    client_args={
        "api_key": "dummy_key",
        "base_url": "http://custom-proxy-endpoint.example.com/api/v1",
    },
    model_id="us.anthropic.claude-3-7-sonnet-20250219-v1:0",
    params={
        "max_tokens": 1000,
        "temperature": 0.7,
    },
)

agent = Agent(model=model, tools=[calculator])
response = agent("What is 2+2")
- Run the script
- The agent attempts to use the calculator tool but fails when sending the tool result back to the model
Expected Behavior
The agent should successfully use the calculator tool and return the result.
Actual Behavior
The script fails with a detailed error trace when the agent tries to use the calculator tool:
I can solve this simple arithmetic problem for you using the calculator tool.
Tool #1: calculator
Traceback (most recent call last):
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 220, in event_loop_cycle
return _handle_tool_execution(
stop_reason,
...<12 lines>...
kwargs,
)
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 428, in _handle_tool_execution
return recurse_event_loop(
model=model,
...<5 lines>...
**kwargs,
)
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 305, in recurse_event_loop
) = event_loop_cycle(**kwargs)
~~~~~~~~~~~~~~~~^^^^^^^^^^
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 190, in event_loop_cycle
raise e
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 148, in event_loop_cycle
stop_reason, message, usage, metrics, kwargs["request_state"] = stream_messages(
~~~~~~~~~~~~~~~^
model,
^^^^^^
...<4 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/path/to/site-packages/strands/event_loop/streaming.py", line 340, in stream_messages
return process_stream(chunks, callback_handler, messages, **kwargs)
File "/path/to/site-packages/strands/event_loop/streaming.py", line 290, in process_stream
for chunk in chunks:
^^^^^^
File "/path/to/site-packages/strands/types/models/model.py", line 115, in converse
for event in response:
^^^^^^^^
File "/path/to/site-packages/strands/models/openai.py", line 89, in stream
response = self.client.chat.completions.create(**request)
File "/path/to/site-packages/openai/_utils/_utils.py", line 287, in wrapper
return func(*args, **kwargs)
File "/path/to/site-packages/openai/resources/chat/completions/completions.py", line 925, in create
return self._post(
~~~~~~~~~~^
"/chat/completions",
^^^^^^^^^^^^^^^^^^^^
...<43 lines>...
stream_cls=Stream[ChatCompletionChunk],
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/path/to/site-packages/openai/_base_client.py", line 1242, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/site-packages/openai/_base_client.py", line 1037, in request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'SystemMessage', 'role'), 'msg': "Input should be 'system'", 'input': 'tool', 'ctx': {'expected': "'system'"}}, {'type': 'string_type', 'loc': ('body', 'messages', 2, 'SystemMessage', 'content'), 'msg': 'Input should be a valid string', 'input': [{'text': 'Result: 4', 'type': 'text'}]}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'UserMessage', 'role'), 'msg': "Input should be 'user'", 'input': 'tool', 'ctx': {'expected': "'user'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'AssistantMessage', 'role'), 'msg': "Input should be 'assistant'", 'input': 'tool', 'ctx': {'expected': "'assistant'"}}, {'type': 'string_type', 'loc': ('body', 'messages', 2, 'ToolMessage', 'content'), 'msg': 'Input should be a valid string', 'input': [{'text': 'Result: 4', 'type': 'text'}]}]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/path/to/sdk-python/baseurl.py", line 19, in <module>
response = agent("What is 2+2")
File "/path/to/site-packages/strands/agent/agent.py", line 358, in __call__
result = self._run_loop(prompt, kwargs)
File "/path/to/site-packages/strands/agent/agent.py", line 462, in _run_loop
return self._execute_event_loop_cycle(invocation_callback_handler, kwargs)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/site-packages/strands/agent/agent.py", line 490, in _execute_event_loop_cycle
stop_reason, message, metrics, state = event_loop_cycle(
~~~~~~~~~~~~~~~~^
model=model,
^^^^^^^^^^^^
...<9 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/path/to/site-packages/strands/event_loop/event_loop.py", line 258, in event_loop_cycle
raise EventLoopException(e, kwargs["request_state"]) from e
strands.types.exceptions.EventLoopException: [{'type': 'literal_error', 'loc': ('body', 'messages', 2, 'SystemMessage', 'role'), 'msg': "Input should be 'system'", 'input': 'tool', 'ctx': {'expected': "'system'"}}, {'type': 'string_type', 'loc': ('body', 'messages', 2, 'SystemMessage', 'content'), 'msg': 'Input should be a valid string', 'input': [{'text': 'Result: 4', 'type': 'text'}]}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'UserMessage', 'role'), 'msg': "Input should be 'user'", 'input': 'tool', 'ctx': {'expected': "'user'"}}, {'type': 'literal_error', 'loc': ('body', 'messages', 2, 'AssistantMessage', 'role'), 'msg': "Input should be 'assistant'", 'input': 'tool', 'ctx': {'expected': "'assistant'"}}, {'type': 'string_type', 'loc': ('body', 'messages', 2, 'ToolMessage', 'content'), 'msg': 'Input should be a valid string', 'input': [{'text': 'Result: 4', 'type': 'text'}]}]
The trace shows that the agent correctly identifies and invokes the calculator tool, but the request fails when the tool result is sent back to the endpoint. The key validation errors are:
- The custom endpoint only accepts the role values 'system', 'user', and 'assistant' for those message types, but receives 'tool'
- For the ToolMessage, the endpoint expects string content but receives a list of content blocks: [{'text': 'Result: 4', 'type': 'text'}]
This indicates a format mismatch between how the Strands SDK formats tool messages and what the custom API endpoint expects.
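To make the mismatch concrete, here is a sketch of the two message shapes. The tool-result field names on the "expected" side are my assumption based on the OpenAI chat-completions schema, not taken from the SDK source, and the tool_call_id is an illustrative placeholder:

```python
# Shape the SDK appears to send, per the validation errors above:
# role "tool" with content as a list of text blocks.
sdk_message = {
    "role": "tool",
    "content": [{"text": "Result: 4", "type": "text"}],
}

# Shape the strictly validating proxy appears to expect:
# role "tool" with plain-string content.
expected_message = {
    "role": "tool",
    "tool_call_id": "call_placeholder",  # illustrative; real id comes from the tool call
    "content": "Result: 4",
}
```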
Additional Context
This appears to be a compatibility issue between:
- The Strands SDK's OpenAI model implementation (which formats messages in a specific way)
- The custom proxy endpoint (which expects a different format)
I'm connecting to a Claude model through an OpenAI-compatible base URL because customer environment constraints require all model invocations to go through their custom endpoint. While using BedrockModel directly would be preferable, this isn't possible in my environment.
The proxy endpoint appears to be validating the message format strictly according to OpenAI's API schema, but the Strands SDK might be
using extensions or variations of this format, particularly for tool messages.
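One possible client-side workaround (a sketch under my assumptions, not confirmed SDK behavior) would be to normalize the request messages before sending, collapsing list-of-text-blocks content into a plain string so it passes strict validation:

```python
def flatten_message_content(messages):
    """Hypothetical helper: collapse list-of-text-blocks content
    (e.g. [{'text': 'Result: 4', 'type': 'text'}]) into a plain
    string, leaving string content untouched."""
    normalized = []
    for message in messages:
        message = dict(message)  # shallow copy; don't mutate the original
        content = message.get("content")
        if isinstance(content, list):
            message["content"] = "".join(
                block.get("text", "")
                for block in content
                if isinstance(block, dict) and block.get("type") == "text"
            )
        normalized.append(message)
    return normalized
```

Based on the traceback, this could be applied to the request just before self.client.chat.completions.create(**request) is called, though whether that is the right layer to patch is for the maintainers to decide.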
I initially thought this was related to issue #136, but that issue was about empty choices in the response, while this is about message
format validation.
Possible Solution
No response
Related Issues
- #136 (OpenAIModel Chat Completion Request errors out)
- #185 (Handle empty choices in OpenAI model provider)