Description
Expected Behavior
`JdbcChatMemoryRepository` should persist and restore the `tool_calls` field for `AssistantMessage`s that use tool calling (i.e., function calling).
Specifically:
- Add a `tool_calls TEXT NULL` column to the schema
- Serialize the list of `ToolCall` objects to JSON on insert
- Deserialize and rehydrate `AssistantMessage(null, toolCalls)` when loading from the database
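The serialize/rehydrate steps above can be sketched roughly as follows. This is a dependency-free illustration, not the actual Spring AI implementation: the simplified `ToolCall` record, the helper names, and the Base64 encoding of `arguments` (used here only to avoid hand-rolling JSON string escaping) are all assumptions; a real fix would serialize Spring AI's `ToolCall` list with a JSON library such as Jackson.

```java
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Base64;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Illustrative round-trip for the proposed tool_calls TEXT NULL column.
public final class ToolCallColumnSketch {

    // Simplified stand-in for Spring AI's ToolCall (assumption).
    record ToolCall(String id, String name, String arguments) {}

    /** Serialize on insert; an empty list leaves the column NULL. */
    static String serialize(List<ToolCall> calls) {
        if (calls == null || calls.isEmpty()) return null;
        StringBuilder sb = new StringBuilder("[");
        for (int i = 0; i < calls.size(); i++) {
            ToolCall c = calls.get(i);
            // Base64-encode arguments so no JSON escaping is needed here.
            String args = Base64.getEncoder()
                .encodeToString(c.arguments().getBytes(StandardCharsets.UTF_8));
            if (i > 0) sb.append(',');
            sb.append("{\"id\":\"").append(c.id())
              .append("\",\"name\":\"").append(c.name())
              .append("\",\"arguments\":\"").append(args).append("\"}");
        }
        return sb.append(']').toString();
    }

    /** Deserialize on load; a NULL column rehydrates to an empty list. */
    static List<ToolCall> deserialize(String stored) {
        List<ToolCall> calls = new ArrayList<>();
        if (stored == null) return calls;
        Matcher m = Pattern
            .compile("\\{\"id\":\"([^\"]*)\",\"name\":\"([^\"]*)\",\"arguments\":\"([^\"]*)\\\"\\}")
            .matcher(stored);
        while (m.find()) {
            String args = new String(Base64.getDecoder().decode(m.group(3)),
                StandardCharsets.UTF_8);
            calls.add(new ToolCall(m.group(1), m.group(2), args));
        }
        return calls;
    }
}
```

The key behavioral point is the NULL handling at both ends: an empty list never writes an empty JSON array, and a NULL column rehydrates to an empty list rather than dropping the message.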
This ensures that responses such as `new AssistantMessage(null, toolCalls)` can be stored and later reused without violating OpenAI's API requirements.
Current Behavior
Currently, `JdbcChatMemoryRepository` only stores the `content` field. When an `AssistantMessage` has `content = null` and non-empty `toolCalls`, the following happens:
- The `toolCalls` are silently discarded during persistence
- Upon loading, the message becomes `content = null`, `toolCalls = []`
- This leads to a malformed payload when used in a follow-up prompt:

```json
{ "role": "assistant", "content": null }
```
This payload violates the OpenAI API spec:

> If `content` is null, a non-empty `tool_calls` array must be present.
Resulting in:

```json
{
  "error": {
    "message": "Invalid value for 'content': expected a string, got null.",
    "type": "invalid_request_error",
    "param": "messages.[3].content",
    "code": null
  }
}
```
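For contrast, a well-formed assistant message under this rule keeps `content` null but carries a non-empty `tool_calls` array. The values below are illustrative, following the OpenAI Chat Completions message format:

```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "get_weather",
        "arguments": "{\"city\": \"Paris\"}"
      }
    }
  ]
}
```

This is exactly the shape the repository must be able to reconstruct from the database.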
Context
Tool calling is a core feature of OpenAI and Azure OpenAI LLMs. Without persisting `tool_calls`, JDBC-based chat memory implementations cannot reconstruct valid assistant messages, causing:
- 400 API errors
- Broken multi-turn flows
- Prompt history loss
This makes `JdbcChatMemoryRepository` incompatible with modern LLM usage patterns.
Alternatives considered
- In-memory chat memory works correctly (retains `toolCalls`)
- Custom JDBC memory with JSON serialization (invasive)
- Skipping such messages entirely (causes context loss)
Workaround
Avoid using `JdbcChatMemoryRepository` when tool calling is enabled. Use the in-memory implementation as a fallback instead.
Related