[BUG] Agent object not JSON serializable when passed to direct tool calls #350

@cagataycali

Description

Checks

  • I have updated to the latest minor and patch version of Strands
  • I have checked the documentation and this is not expected behavior
  • I have searched ./issues and there are no duplicates of my issue

Strands Version

source

Python Version

source

Operating System

macOS

Installation Method

git clone

Steps to Reproduce

When passing an Agent object as a parameter to a direct tool call (e.g., agent.tool.use_llm(agent=agent)), the SDK crashes with TypeError: Object of type Agent is not JSON serializable.

Problem Description

The issue occurs in the _record_tool_execution method in agent.py at line 547 when trying to serialize tool input parameters using json.dumps(tool['input']). If any of the tool parameters contains an Agent object, the JSON serialization fails.
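The failure mode can be reproduced in isolation, independent of Strands: json.dumps raises TypeError for any object its default encoder does not recognize. (The Opaque class below is a hypothetical stand-in for an Agent instance.)

```python
import json


class Opaque:
    """Hypothetical stand-in for any non-serializable object, e.g. an Agent."""
    pass


try:
    json.dumps({"prompt": "Test prompt", "agent": Opaque()})
except TypeError as exc:
    # The standard encoder has no handler for arbitrary objects.
    print(exc)  # Object of type Opaque is not JSON serializable
```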

Error Details

TypeError: Object of type Agent is not JSON serializable

Stack trace location:

File "/Users/cagatay/tools/.venv/lib/python3.13/site-packages/strands/agent/agent.py", line 547, in _record_tool_execution
    {"text": (f"agent.tool.{tool['name']} direct tool call.\nInput parameters: {json.dumps(tool['input'])}\n")}

Reproduction Steps

  1. Create an agent with tools that accept an agent parameter (like use_llm)
  2. Call the tool directly passing the agent object: agent.tool.use_llm(agent=agent, ...)
  3. The SDK attempts to record the tool execution and fails when serializing the input parameters

Minimal Reproduction Code

from strands import Agent
from strands_tools import use_llm

agent = Agent(tools=[use_llm])

# This will fail with JSON serialization error
result = agent.tool.use_llm(
    prompt="Test prompt",
    system_prompt="You are a helper.",
    tools=["calculator"],
    agent=agent  # This Agent object causes the JSON serialization to fail
)

Expected Behavior

Direct tool calls should work without crashing, even when passing complex objects like Agent instances as parameters.

Actual Behavior

result = agent.tool.use_llm(
    prompt="Calculate 2 + 2 and read a file",
    system_prompt="You are a helper.",
    tools=["calculator", "file_read"],  # Only these 2 tools will be available
    agent=agent
)

print(result)
# (.venv) c² ~/tools/ [main] python3 test.py
# I can help you with both tasks. Let me break them down:

# 1. Calculating 2 + 2 is straightforward, I can use the calculator tool for this.
# 2. For reading a file, I'll need more information about which file you want me to read and how you want me to read it.

# Let me first calculate 2 + 2:
# Tool #1: calculator
# ╭──────────────────────────────────────── Calculation Result ─────────────────────────────────────────╮
# │                                                                                                     │
# │  ╭───────────┬─────────────────────╮                                                                │
# │  │ Operation │ Evaluate Expression │                                                                │
# │  │ Input     │ 2 + 2               │                                                                │
# │  │ Result    │ 4                   │                                                                │
# │  ╰───────────┴─────────────────────╯                                                                │
# │                                                                                                     │
# ╰─────────────────────────────────────────────────────────────────────────────────────────────────────╯
# For reading a file, I need more information:
# 1. Which file would you like me to read? Please provide a path.
# 2. How would you like to read it? Options include:
#    - view: Display the entire file content
#    - lines: Show specific line ranges
#    - preview: Quick content preview
#    - search: Find specific text in the file
#    - and other modes

# Please let me know the file path and preferred reading mode, and I'll be happy to help.
# Traceback (most recent call last):
#   File "/Users/cagatay/tools/test.py", line 6, in <module>
#     result = agent.tool.use_llm(
#         prompt="Calculate 2 + 2 and read a file",
#     ...<2 lines>...
#         agent=agent
#     )
#   File "/Users/cagatay/tools/.venv/lib/python3.13/site-packages/strands/agent/agent.py", line 147, in caller
#     self._agent._record_tool_execution(
#     ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
#         tool_use, tool_result, user_message_override, self._agent.messages
#         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
#     )
#     ^
#   File "/Users/cagatay/tools/.venv/lib/python3.13/site-packages/strands/agent/agent.py", line 547, in _record_tool_execution
#     {"text": (f"agent.tool.{tool['name']} direct tool call.\nInput parameters: {json.dumps(tool['input'])}\n")}
#                                                                                 ~~~~~~~~~~^^^^^^^^^^^^^^^
#   File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/__init__.py", line 231, in dumps
#     return _default_encoder.encode(obj)
#            ~~~~~~~~~~~~~~~~~~~~~~~^^^^^
#   File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 200, in encode
#     chunks = self.iterencode(o, _one_shot=True)
#   File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 261, in iterencode
#     return _iterencode(o, 0)
#   File "/opt/homebrew/Cellar/python@3.13/3.13.5/Frameworks/Python.framework/Versions/3.13/lib/python3.13/json/encoder.py", line 180, in default
#     raise TypeError(f'Object of type {o.__class__.__name__} '
#                     f'is not JSON serializable')
# TypeError: Object of type Agent is not JSON serializable
# (.venv) c² ~/tools/ [main] 

Additional Context

This bug affects any tool that accepts complex objects as parameters, not just the use_llm tool. The issue is in the core agent recording mechanism rather than specific tools.

Possible Solutions

Option 1: Filter non-serializable objects

Modify the _record_tool_execution method to filter out non-JSON-serializable objects before calling json.dumps():

def _record_tool_execution(self, tool: ToolUse, tool_result: ToolResult, user_message_override: Optional[str], messages: Messages) -> None:
    # Filter out non-serializable objects
    serializable_input = {}
    for key, value in tool['input'].items():
        try:
            json.dumps(value)  # Test if serializable
            serializable_input[key] = value
        except (TypeError, ValueError):
            serializable_input[key] = f"<non-serializable: {type(value).__name__}>"
    
    user_msg_content: List[ContentBlock] = [
        {"text": (f"agent.tool.{tool['name']} direct tool call.\nInput parameters: {json.dumps(serializable_input)}\n")}
    ]
    # ... rest of the method

Option 2: Use a custom JSON encoder

Create a custom JSON encoder that handles Agent objects and other non-serializable types gracefully.
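One possible sketch of this option: override JSONEncoder.default so unknown types degrade to a placeholder string instead of raising. (SafeEncoder and the bare Agent class here are illustrative names, not part of the Strands API.)

```python
import json


class SafeEncoder(json.JSONEncoder):
    """Encode unknown types as a placeholder instead of raising TypeError."""

    def default(self, o):
        # Called only for objects the base encoder cannot handle.
        return f"<non-serializable: {type(o).__name__}>"


class Agent:
    """Hypothetical stand-in for strands.Agent."""
    pass


print(json.dumps({"prompt": "hi", "agent": Agent()}, cls=SafeEncoder))
# {"prompt": "hi", "agent": "<non-serializable: Agent>"}
```

This keeps _record_tool_execution's message format intact: json.dumps(tool['input'], cls=SafeEncoder) would log every parameter, replacing only the values that cannot be serialized.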

Option 3: Skip parameter logging for complex objects

Add a flag or configuration to skip detailed parameter logging when complex objects are involved.

Related Issues

No response

Labels

bug (Something isn't working)