Bug description
During the multi-turn tool-call loop, OpenAiChatModel.createRequest() serializes
"content":"" (empty string) on assistant messages that carry tool_calls. When
using an OpenAI-compatible backend that proxies to Anthropic/Claude (e.g. Heroku
Inference), the API rejects this with HTTP 400: messages[6]: content is required
OpenAI's own API tolerates "content":"", but Anthropic's API requires content to
be either a non-empty string or null. Since spring-ai-starter-model-openai is
designed to work with any OpenAI-compatible endpoint, the serialization should
produce valid payloads for all compliant backends.
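A minimal sketch of the kind of guard the serializer could apply before building the request. This is plain Java with a hypothetical helper (normalizeContent is not part of Spring AI); it only illustrates the proposed behavior:

```java
public class ContentNormalizer {

    // Hypothetical helper illustrating the fix: an assistant message that
    // carries tool_calls but no text should serialize content as null,
    // not as an empty string.
    static String normalizeContent(String content, boolean hasToolCalls) {
        if (hasToolCalls && (content == null || content.isEmpty())) {
            return null; // Jackson would then emit "content":null
        }
        return content;
    }

    public static void main(String[] args) {
        // Tool-call-only assistant turn: "" is normalized to null
        System.out.println(normalizeContent("", true));      // prints "null"
        // Ordinary assistant text passes through unchanged
        System.out.println(normalizeContent("hello", false)); // prints "hello"
    }
}
```

OpenAI's API accepts both forms, so normalizing to null is safe for the native backend while also satisfying Anthropic-compatible proxies.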
Environment
- Spring AI: 1.1.2
- Spring Boot: 3.5.9
- Java: 17
- Backend: Heroku Inference (OpenAI-compatible, proxies to Claude-Opus 4-5)
Steps to reproduce
- Configure
spring-ai-starter-model-openaiwith an Anthropic-compatible endpoint - Register a
@Toolfunction - Send a user message that triggers the model to call the tool
- The model responds with
tool_callsand no text content - Spring AI executes the tool locally, then sends the conversation history
(including the assistant tool_call message) back to the API - HTTP 400:
"messages[N]: content is required"
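The second-turn payload at the heart of the failure can be reproduced without Spring at all. The sketch below (assistantToolCallMessage is a hypothetical helper; the field layout follows the OpenAI chat format) contrasts what Spring AI sends today with what an Anthropic-backed proxy accepts:

```java
public class ReproPayload {

    // Builds the assistant tool-call entry of the conversation history.
    // Passing "" mirrors what OpenAiChatModel.createRequest() produces today;
    // passing null is the form Anthropic-compatible backends accept.
    static String assistantToolCallMessage(String content) {
        String contentJson = (content == null) ? "null" : "\"" + content + "\"";
        return "{\"role\":\"assistant\",\"content\":" + contentJson
                + ",\"tool_calls\":[{\"id\":\"call_1\",\"type\":\"function\","
                + "\"function\":{\"name\":\"myTool\",\"arguments\":\"{}\"}}]}";
    }

    public static void main(String[] args) {
        // Rejected by Anthropic-backed proxies with HTTP 400:
        System.out.println(assistantToolCallMessage(""));
        // Accepted by both OpenAI and Anthropic-compatible backends:
        System.out.println(assistantToolCallMessage(null));
    }
}
```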
Expected behavior
The assistant message should have "content": null (not "") when the model
response only contained tool calls and no text.
Minimal Complete Reproducible example
{"role":"assistant","content":null,"tool_calls":[{"id":"...","type":"function","function":{"name":"myTool","arguments":"{}"}}]}
Full stack trace
org.springframework.ai.retry.NonTransientAiException: HTTP 400 -
{"error":{"code":400,"message":"messages[6]: content is required","type":"invalid_request"}}
at o.s.ai.retry.autoconfigure.SpringAiRetryAutoConfiguration$2.handleError(SpringAiRetryAutoConfiguration.java:126)
at o.s.web.client.ResponseErrorHandler.handleError(ResponseErrorHandler.java:58)