Description
When using `litellm` with models accessed via the OpenRouter provider, the `supports_response_schema` function currently returns `False`.
This happens because OpenRouter is not explicitly listed among the providers that globally support structured outputs (`PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA`), and there appears to be no programmatic way, via OpenRouter's API, to check per model whether a specific model supports the `response_format` parameter. As a result, the check defaults to `False`.
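A minimal way to observe this (the model name is taken from the reproduction below; the `False` result is exactly the behavior this report is about):

```python
import litellm

# With current litellm behavior this prints False: OpenRouter is not in the
# globally-supported provider list, and there is no per-model metadata to consult.
print(
    litellm.supports_response_schema(
        model="mistralai/mistral-small-3.1-24b-instruct",
        custom_llm_provider="openrouter",
    )
)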
This causes issues for applications built on `litellm`, such as crewAI, which rely on this check to determine whether to include the `response_format` parameter in the API request. If `supports_response_schema` is `False`, `response_format` is omitted, breaking functionality that expects structured output.
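The gating pattern looks roughly like this (a hypothetical sketch of the behavior described above, not crewAI's actual code; `build_completion_kwargs` is an illustrative name):

```python
import litellm

def build_completion_kwargs(model: str, provider: str, schema: dict) -> dict:
    """Illustrative: response_format is attached only when litellm reports support."""
    kwargs = {"model": model}
    if litellm.supports_response_schema(model=model, custom_llm_provider=provider):
        kwargs["response_format"] = schema
    # When the check returns False, structured output is silently dropped.
    return kwargs
```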
OpenRouter does support structured outputs for some models that are accessible through their API (e.g., OpenAI's GPT-4o, Fireworks models), as stated in their documentation: https://openrouter.ai/docs#structured-outputs (see the "Model Support" section).
Since `litellm` cannot reliably determine per-model support via the OpenRouter API, the current automatic check is insufficient and blocks valid use cases.
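For reference, calling OpenRouter's API directly with a `response_format` payload does work for supported models; a sketch using `requests` (the JSON schema here is a stand-in, and the request shape follows the structured-outputs format in the OpenRouter docs linked above):

```python
import os
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
    json={
        "model": "mistralai/mistral-small-3.1-24b-instruct",
        "messages": [{"role": "user", "content": "Give me a title and a summary."}],
        # Structured-outputs request shape per the OpenRouter documentation
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "outline",  # illustrative schema name
                "strict": True,
                "schema": {
                    "type": "object",
                    "properties": {
                        "title": {"type": "string"},
                        "summary": {"type": "string"},
                    },
                    "required": ["title", "summary"],
                },
            },
        },
    },
    timeout=60,
)
print(resp.json())
```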
Steps to Reproduce
- Prerequisites:
  - Have Python installed.
  - Install `crewai` (specifically version 0.117.0), `litellm`, and `pydantic` using `uv` (or `pip`):

    ```
    uv tool install crewai==0.117.0
    ```

  - Obtain an OpenRouter API key and set it as an environment variable:

    ```
    export OPENROUTER_API_KEY='sk-or-...'
    ```
- Create CrewAI Project: Use the CrewAI CLI to create a new project flow:

  ```
  crewai create flow projectname
  cd projectname
  ```
- Modify `src/projectname/main.py`: Open the `src/projectname/main.py` file (or equivalent main entry point in your flow) and make the following changes:

  a. Initialize the OpenRouter LLM: Replace the default LLM initialization with your OpenRouter configuration. Ensure the `OPENROUTER_API_KEY` environment variable is checked.

  ```python
  # Initialize the LLM
  OPENROUTER_API_KEY = os.getenv("OPENROUTER_API_KEY")
  mistralLLM: BaseLLM = BaseLLM(
      model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
      base_url="https://openrouter.ai/api/v1",
      api_key=OPENROUTER_API_KEY,
      temperature=0.0,
      seed=1984,
      stream=True,
      # Add additional params for OpenRouter routing preference
      additional_params={
          "provider": {
              # Note: use provider names such as 'mistralai' or 'openai' here, not model names
              "order": ["mistralai"],
              "allow_fallbacks": False,
              # Requires the provider to support all params sent, including response_format if sent
              "require_parameters": True,
          },
      },
  )
  llm = mistralLLM  # Assign your OpenRouter LLM to the 'llm' variable used by agents/tasks
  ```
- Run the Flow: Execute the CrewAI flow using the kickoff command:

  ```
  crewai flow kickoff
  ```
Expected behavior
Allow the user to manually set `supports_response_schema`.
Screenshots/Code snippets
```python
# Initialize the LLM
llm = mistralLLM
llm.response_format = GuideOutline
llm.additional_params = {
    "provider": {
        "order": ["Mistral"],
        "allow_fallbacks": False,
        "require_parameters": True,
    },
}
```
Operating System
Windows 11
Python Version
3.12
crewAI Version
0.117.0
crewAI Tools Version
flow
Virtual Environment
Venv
Evidence
```
PS /path/to/project/> crewai flow kickoff
Running the Flow
╭────────────────────────────────────────────────────────────────── Flow Execution ──────────────────────────────────────────────────────────────────╮
│ │
│ Starting Flow Execution │
│ Name: GuideCreatorFlow │
│ ID: [FLOW_ID] │
│ │
│ │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
└── 🧠 Starting Flow...
Flow started with ID: [FLOW_ID]
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── 🧠 Starting Flow...
└── 🔄 Running: get_user_input
=== Create Your Comprehensive Guide ===
What topic would you like to create a guide for? gacha games
Who is your target audience? (beginner/intermediate/advanced) beginner
Creating a guide on gacha games for beginner audience...
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
└── ✅ Completed: get_user_input
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
├── ✅ Completed: get_user_input
└── 🔄 Running: create_guide_outline
Creating guide outline...
🌊 Flow: GuideCreatorFlow
ID: [FLOW_ID]
├── Flow Method Step
├── ✅ Completed: get_user_input
└── ❌ Failed: create_guide_outline
[Flow._execute_single_listener] Error in method create_guide_outline: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
Traceback (most recent call last):
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 1030, in _execute_single_listener
listener_result = await self._execute_method(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 876, in _execute_method
raise e
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 846, in _execute_method
else method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/src/alicecrewai/main.py", line 100, in create_guide_outline
response = llm.call(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 857, in call
self._validate_call_params()
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 999, in _validate_call_params
raise ValueError(
ValueError: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/path/to/project/.venv/Scripts/kickoff.exe/__main__.py", line 10, in <module>
File "/path/to/project/src/alicecrewai/main.py", line 182, in kickoff
GuideCreatorFlow().kickoff()
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 722, in kickoff
return asyncio.run(run_flow())
^^^^^^^^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/runners.py", line 195, in run
return runner.run(main)
^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/runners.py", line 118, in run
return self._loop.run_until_complete(task)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/user/path/AppData/Roaming/uv/python/cpython-3.12.9-windows-x86_64-none/Lib/asyncio/base_events.py", line 691, in run_until_complete
return future.result()
^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 720, in run_flow
return await self.kickoff_async(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 787, in kickoff_async
await asyncio.gather(*tasks)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 823, in _execute_start_method
await self._execute_listeners(start_method_name, result)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 935, in _execute_listeners
await asyncio.gather(*tasks)
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 1030, in _execute_single_listener
listener_result = await self._execute_method(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 876, in _execute_method
raise e
File "/path/to/project/.venv/Lib/site-packages/crewai/flow/flow.py", line 846, in _execute_method
else method(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/src/alicecrewai/main.py", line 100, in create_guide_outline
response = llm.call(messages=messages)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 857, in call
self._validate_call_params()
File "/path/to/project/.venv/Lib/site-packages/crewai/llm.py", line 999, in _validate_call_params
raise ValueError(
ValueError: The model openrouter/mistralai/mistral-small-3.1-24b-instruct does not support response_format for provider 'openrouter'. Please remove response_format or use a supported model.
An error occurred while running the flow: Command '['uv', 'run', 'kickoff']' returned non-zero exit status 1.
```
Possible Solution
To address this, I propose adding a mechanism to manually override or force the `supports_response_schema` check specifically for the OpenRouter provider.
A simple approach could be introducing a configuration option or a flag that users can set when they know their chosen OpenRouter model does support structured output.
For example, within the `supports_response_schema` function (or a related configuration layer), a check could be added like:
```python
# Inside litellm.supports_response_schema
def supports_response_schema(
    model: str, custom_llm_provider: Optional[str] = None
) -> bool:
    # ... (existing get_llm_provider logic) ...

    # --- ADDITION START ---
    # Check for a manual override for OpenRouter.
    # This assumes a mechanism like `litellm.force_response_schema_support_for_openrouter = True`
    # exists, or perhaps a provider-specific flag setting.
    if custom_llm_provider == litellm.LlmProviders.OPENROUTER:
        # Replace this check with the actual configuration mechanism
        if getattr(litellm, '_openrouter_force_structured_output', False):
            verbose_logger.debug("Manually forcing response schema support for OpenRouter.")
            return True
        # If no manual override is set, proceed with the existing checks / default behavior
    # --- ADDITION END ---

    # providers that globally support response schema
    PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA = [
        litellm.LlmProviders.PREDIBASE,
        litellm.LlmProviders.FIREWORKS_AI,
    ]
    if custom_llm_provider in PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA:
        return True

    # ... (rest of the existing _supports_factory logic) ...
    return _supports_factory(
        model=model,
        custom_llm_provider=custom_llm_provider,
        key="supports_response_schema",
    )
```
This would require users to:
- Know that their specific OpenRouter model supports structured output.
- Set a corresponding flag (e.g., `litellm._openrouter_force_structured_output = True`) before making calls via OpenRouter where structured output is needed.
This manual override would bypass the currently failing automatic check and allow the `response_format` parameter to be passed to OpenRouter, enabling structured output functionality for compatible models.
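Hypothetical usage, assuming the sketch above (with its illustrative `_openrouter_force_structured_output` flag) were applied to `litellm`:

```python
import litellm

# Proposed (not yet existing) opt-in flag from the sketch above
litellm._openrouter_force_structured_output = True

# With the patched check applied, this would return True instead of False
assert litellm.supports_response_schema(
    model="mistralai/mistral-small-3.1-24b-instruct",
    custom_llm_provider="openrouter",
)
```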
Additional context
None