
[BUG] Expected thinking or redacted_thinking, but found text when using Claude Sonnet 3.7 Thinking #2323

Closed
@noahzuiuc

Description


litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}
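
In other words, once thinking is enabled, any assistant turn sent back to the API must begin with the thinking block from the previous response. A rough sketch of the rejected vs. accepted message shapes (block fields per the linked Anthropic docs; all values are placeholders, not CrewAI's actual payload):

# Rejected: the assistant turn starts with a plain text block; this is
# what "messages.1.content.0.type ... found text" is pointing at.
invalid_messages = [
    {"role": "user", "content": "First step of the task."},
    {
        "role": "assistant",
        "content": [
            {"type": "text", "text": "Intermediate answer..."},
        ],
    },
    {"role": "user", "content": "Next step."},
]

# Accepted: the thinking block returned by the previous call is replayed
# first; the signature value is whatever Anthropic returned with it.
valid_messages = [
    {"role": "user", "content": "First step of the task."},
    {
        "role": "assistant",
        "content": [
            {"type": "thinking", "thinking": "...", "signature": "<from prior response>"},
            {"type": "text", "text": "Intermediate answer..."},
        ],
    },
    {"role": "user", "content": "Next step."},
]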

This error is thrown during crew kickoff. I tested litellm on its own and everything works as expected. I have defined the LLM backend for my agent as:

LLM(model="<provider>/<model_name>", base_url="<base_url>", api_key="<api_key>", temperature=1, max_tokens=24000, thinking={"type": "enabled", "budget_tokens": 16000})
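
For reference, the standalone litellm sanity check that succeeds looks roughly like this (model id and prompt are illustrative):

import litellm

# A direct litellm.completion call with the same thinking config works,
# so the 400 appears to come from how CrewAI rebuilds the message
# history between turns, not from the parameters themselves.
response = litellm.completion(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model id
    messages=[{"role": "user", "content": "Say hello."}],
    temperature=1,
    max_tokens=24000,
    thinking={"type": "enabled", "budget_tokens": 16000},
)
print(response.choices[0].message.content)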

Steps to Reproduce

Run a CrewAI crew with Claude 3.7 Sonnet (extended thinking enabled) as the LLM backend; a minimal sketch follows.
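
A minimal reproduction sketch (the agent and task are placeholders; any run in which the agent's own reply is sent back as conversation history should hit the error):

from crewai import Agent, Crew, Task, LLM

# Thinking-enabled Claude 3.7 Sonnet backend, configured as above.
llm = LLM(
    model="anthropic/claude-3-7-sonnet-20250219",  # illustrative model id
    temperature=1,
    max_tokens=24000,
    thinking={"type": "enabled", "budget_tokens": 16000},
)

# Placeholder agent and task; the failure happens in the LLM call, not here.
agent = Agent(role="Researcher", goal="Answer questions", backstory="...", llm=llm)
task = Task(description="Explain why the sky is blue.", expected_output="A short explanation", agent=agent)

# Raises litellm.BadRequestError (AnthropicException), as in the traceback under Evidence.
Crew(agents=[agent], tasks=[task]).kickoff()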

Expected behavior

The crew should run without errors

Screenshots/Code snippets

litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}

Operating System

Other (specify in additional context)

Python Version

3.12

crewAI Version

0.105.0

crewAI Tools Version

0.37.0

Virtual Environment

Venv

Evidence

ERROR:root:LiteLLM call failed: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}
Error during LLM call: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}
Traceback (most recent call last):
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/llms/anthropic/chat/handler.py", line 412, in completion
    response = client.post(
               ^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 553, in post
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/http_handler.py", line 534, in post
    response.raise_for_status()
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/httpx/_models.py", line 763, in raise_for_status
    raise HTTPStatusError(message, request=request, response=self)
httpx.HTTPStatusError: Client error '400 Bad Request' for url 'https://api.anthropic.com/v1/messages'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/400

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/main.py", line 1846, in completion
    response = anthropic_chat_completions.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/llms/anthropic/chat/handler.py", line 427, in completion
    raise AnthropicError(
litellm.llms.anthropic.common_utils.AnthropicError: {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/noahz/Documents/lumyn/.venv/bin/run_crew", line 10, in <module>
    sys.exit(run())
             ^^^^^
  File "/Users/noahz/Documents/lumyn/src/lumyn/main.py", line 140, in run
    LumynCrew().crew().kickoff(inputs=inputs)
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/agent_analytics/instrumentation/traceloop/sdk/tracing/opentelemetry_instrumentation_crewai/patch.py", line 91, in traced_method
    result = wrapped(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/crew.py", line 576, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/crew.py", line 683, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/crew.py", line 781, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/agent_analytics/instrumentation/traceloop/sdk/tracing/opentelemetry_instrumentation_crewai/patch.py", line 91, in traced_method
    result = wrapped(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/task.py", line 302, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/task.py", line 366, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/agent_analytics/instrumentation/traceloop/sdk/tracing/opentelemetry_instrumentation_crewai/patch.py", line 91, in traced_method
    result = wrapped(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agent.py", line 254, in execute_task
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agent.py", line 243, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 112, in invoke
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 102, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 160, in _invoke_loop
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 140, in _invoke_loop
    answer = self._get_llm_response()
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 210, in _get_llm_response
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 201, in _get_llm_response
    answer = self.llm.call(
             ^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/crewai/llm.py", line 291, in call
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/agent_analytics/instrumentation/traceloop/sdk/tracing/opentelemetry_instrumentation_litellm/patch.py", line 291, in traced_method
    result = wrapped(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion
    raise exception_type(
          ^^^^^^^^^^^^^^^
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2201, in exception_type
    raise e
  File "/Users/noahz/Documents/lumyn/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 526, in exception_type
    raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: AnthropicException - {"type":"error","error":{"type":"invalid_request_error","message":"messages.1.content.0.type: Expected thinking or redacted_thinking, but found text. When thinking is enabled, a final assistant message must start with a thinking block. We recommend you include thinking blocks from previous turns. To avoid this requirement, disable thinking. Please consult our documentation at https://docs.anthropic.com/en/docs/build-with-claude/extended-thinking"}}
An error occurred while running the crew: Command '['uv', 'run', 'run_crew']' returned non-zero exit status 1.

Possible Solution

None

Additional context

macOS Sequoia
