Description
What is the problem this feature would solve?
Currently, @effect/ai-anthropic's `AnthropicLanguageModel` emulates structured output (`generateObject`) by creating a forced tool call: it wraps the schema as a tool's `input_schema` and sets `tool_choice: { type: "tool", name }` (see `packages/ai/anthropic/src/AnthropicLanguageModel.ts`, lines ~336-348).
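For context, the current emulation can be sketched roughly as follows. The helper name `toToolCallRequest` and the simplified types are hypothetical, for illustration only; the real logic lives in `AnthropicLanguageModel.ts`:

```typescript
// Simplified stand-in for a JSON Schema (hypothetical type)
interface JsonSchema {
  readonly type: string
  readonly properties?: Record<string, unknown>
  readonly required?: ReadonlyArray<string>
}

// Hypothetical helper mirroring the current tool-call emulation:
// the schema becomes a tool's input_schema, and tool_choice forces
// the model to "call" that tool so its arguments come back as JSON.
const toToolCallRequest = (objectName: string, schema: JsonSchema) => ({
  tools: [
    {
      name: objectName,
      description: `Generate a ${objectName} object`,
      input_schema: schema
    }
  ],
  // Forcing the tool yields JSON-shaped output, but without
  // token-level constrained decoding against the schema
  tool_choice: { type: "tool", name: objectName }
})

const request = toToolCallRequest("Person", {
  type: "object",
  properties: { name: { type: "string" } },
  required: ["name"]
})
```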
Anthropic has since released native Structured Outputs (now GA), which provides grammar-constrained JSON generation via the output_format parameter. This native approach:
- Guarantees JSON Schema compliance via constrained decoding at the token level
- Eliminates the tool-call workaround and its edge cases (e.g., the ordering bug in `Prompt.fromResponseParts` that causes Anthropic to return a 400 status, #5678)
- Supports Claude Sonnet 4.5 and Opus 4.1+
- Is the officially recommended approach per Anthropic's docs
The current tool-call emulation is less reliable, doesn't benefit from constrained decoding, and diverges from how @effect/ai-openai handles structured output (which already uses OpenAI's native `response_format: { type: "json_schema" }`).
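For comparison, the shape of OpenAI's native structured-output parameter that @effect/ai-openai already emits looks like this (field names per OpenAI's structured-outputs docs; the concrete schema below is an illustrative example, not taken from the library):

```typescript
// OpenAI's native structured-output request parameter: the schema is
// enforced by the API itself rather than emulated via a tool call.
const openaiResponseFormat = {
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "Person",
      strict: true,
      schema: {
        type: "object",
        properties: { name: { type: "string" } },
        required: ["name"],
        additionalProperties: false
      }
    }
  }
}
```

The proposal is to give the Anthropic provider the symmetric treatment using `output_format`.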
What is the feature you are proposing to solve the problem?
Update `AnthropicLanguageModel` to use Anthropic's native `output_format` parameter when `responseFormat.type === "json"`:
- Map `ProviderOptions.responseFormat` to Anthropic's `output_format: { type: "json_schema", json_schema: { name, schema } }` instead of creating a forced tool call
- Update the generated API types (`Generated.ts`) if they don't yet include the `output_format` field (may require regenerating from updated Anthropic API specs)
- The `LanguageModel.ProviderOptions.responseFormat` already carries `{ type: "json", objectName, schema }`, so the mapping is straightforward
- Reference: @effect/ai-openai's `prepareResponseFormat()` function already does the equivalent for OpenAI
Anthropic API docs: https://docs.anthropic.com/en/docs/build-with-claude/structured-outputs
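A rough sketch of the proposed mapping (the helper name `toOutputFormat` is hypothetical, and the exact `Generated.ts` types are an assumption; `output_format` field names follow Anthropic's structured-outputs docs):

```typescript
// Mirrors the shape of LanguageModel.ProviderOptions.responseFormat
// when type === "json" (simplified for illustration)
interface ResponseFormatJson {
  readonly type: "json"
  readonly objectName: string
  readonly schema: Record<string, unknown>
}

// Hypothetical helper: build the native output_format request params
// instead of a forced tool call
const toOutputFormat = (responseFormat: ResponseFormatJson) => ({
  output_format: {
    type: "json_schema",
    json_schema: {
      name: responseFormat.objectName,
      schema: responseFormat.schema
    }
  }
})

const params = toOutputFormat({
  type: "json",
  objectName: "Person",
  schema: {
    type: "object",
    properties: { name: { type: "string" } },
    required: ["name"]
  }
})
```

With this in place, the tool-call branch for `responseFormat.type === "json"` could be removed entirely, since constrained decoding guarantees the response body is schema-compliant JSON.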
What alternatives have you considered?
- Keep current tool-call emulation: works, but doesn't benefit from constrained decoding and is more fragile with respect to prompt ordering
- User-side workaround: could bypass `generateObject` and call the Anthropic API directly with `output_format`, but this defeats the purpose of the provider abstraction
Related issues:
- #4563 (from Discord): Dealing with Unsupported JSON Schema Keywords in OpenAI Structured Calls (JSON Schema keyword stripping) — Anthropic's constrained decoding may have its own keyword restrictions worth considering
- @effect/ai #5678 (ordering bug in `Prompt.fromResponseParts` causes Anthropic to return a 400 status) — partially caused by the tool-call emulation approach