### Description
**Bug Report:** `IndexError` in `litellm` when a CrewAI Agent with Tools uses Ollama/Qwen

**Affected Libraries:** `crewai`, `litellm`
**LLM Provider:** Ollama
**Model:** `qwen3:4b` (or the specific Qwen3 variant used)

**Description:**
When using CrewAI with an agent configured to use an Ollama model (specifically tested with `qwen3`) via `litellm`, an `IndexError: list index out of range` occurs within `litellm`'s Ollama prompt templating logic. The error happens specifically during the LLM call that follows a successful tool execution by the agent; if the agent has no tools assigned, the error does not occur.

The error originates in `litellm/litellm_core_utils/prompt_templates/factory.py` when accessing `messages[msg_i].get("tool_calls")`, suggesting an incompatibility in how the message history (including the tool call and its result/observation) is structured or processed for Ollama after a tool run.
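For illustration only, the message history handed to `litellm` after a tool run has roughly the shape below. This is an assumption about CrewAI's ReAct-style history, not a capture of the actual payload; the traceback shows `ollama_pt` reaching `messages[msg_i].get("tool_calls")` with `msg_i` already past the end of such a list.

```python
# Hypothetical message history after a successful tool call (illustrative only;
# the exact payload CrewAI sends through litellm was not captured for this report).
messages = [
    {"role": "system", "content": "You are a helpful research agent."},
    {"role": "user", "content": "Use the Search tool to find out what CrewAI is."},
    {
        "role": "assistant",
        "content": 'Thought: I should search.\nAction: Search\nAction Input: {"query": "CrewAI"}',
    },
    {"role": "user", "content": "Observation: <tool output goes here>"},
]

# litellm's ollama_pt() flattens such a list into a single prompt string by walking
# it with an index (msg_i); the reported crash is the line
#     tool_calls = messages[msg_i].get("tool_calls")
# executing after msg_i has moved past len(messages) - 1.
```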
**Expected Behavior:**
The agent should successfully process the tool's output and continue its execution by making the next LLM call without errors.

**Actual Behavior:**
The script crashes during the LLM call after the tool execution. An `IndexError: list index out of range` occurs within `litellm`, wrapped in a `litellm.exceptions.APIConnectionError`. The Crew/Task fails.
Steps to Reproduce
- Set up CrewAI to use an Ollama model (e.g.,
qwen3
) as the LLM provider vialitellm
. - Define a CrewAI Agent and assign one or more tools (e.g.,
DuckDuckGoSearchTool
) to it using thetools=[...]
parameter. - Define a Task for this agent that requires it to use one of the assigned tools.
- Execute the task using
crew.kickoff()
(or within a CrewAI Flow). - Observe the agent successfully executing the tool.
- Observe the subsequent attempt by CrewAI/
litellm
to make the next LLM call to Ollama (to process the tool results).
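A minimal, hypothetical reproduction sketch along the lines of the steps above. It assumes Ollama is running locally and serving `qwen3:4b`; a trivial stub tool stands in for `DuckDuckGoSearchTool`, since any assigned tool appears to trigger the issue, and the names (`search`, `researcher`) are illustrative.

```python
# Minimal reproduction sketch (hypothetical names; a stub tool stands in for
# DuckDuckGoSearchTool). Assumes Ollama is serving qwen3:4b at the default port.
from crewai import Agent, Crew, LLM, Task
from crewai.tools import tool


@tool("Search")
def search(query: str) -> str:
    """Return a canned search result for the given query."""
    return f"Stub search results for: {query}"


llm = LLM(model="ollama/qwen3:4b", base_url="http://localhost:11434")

researcher = Agent(
    role="Researcher",
    goal="Answer questions using the Search tool",
    backstory="A minimal agent used to reproduce the issue.",
    tools=[search],  # removing this list avoids the IndexError (see Possible Solution)
    llm=llm,
    verbose=True,
)

task = Task(
    description="Use the Search tool to find out what CrewAI is, then summarize it.",
    expected_output="A short summary based on the tool output.",
    agent=researcher,
)

crew = Crew(agents=[researcher], tasks=[task])
crew.kickoff()  # crashes in litellm's ollama_pt() on the LLM call after the tool runs
```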
### Expected behavior
The agent should successfully process the tool's output and continue its execution by making the next LLM call without errors.
### Screenshots/Code snippets

```
Traceback (most recent call last):
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\main.py", line 2870, in completion
    response = base_llm_http_handler.completion(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\custom_httpx\llm_http_handler.py", line 269, in completion
    data = provider_config.transform_request(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\llms\ollama\completion\transformation.py", line 322, in transform_request
    modified_prompt = ollama_pt(model=model, messages=messages)
                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\mattv\AppData\Local\Programs\Python\Python312\Lib\site-packages\litellm\litellm_core_utils\prompt_templates\factory.py", line 229, in ollama_pt
    tool_calls = messages[msg_i].get("tool_calls")
                 ~~~~~~~~^^^^^^^
IndexError: list index out of range
```
### Operating System
Windows 11
### Python Version
3.12
### crewAI Version
0.118.0
### crewAI Tools Version
0.43.0
### Virtual Environment
None
### Evidence
The script crashes during the LLM call *after* the tool execution. An `IndexError: list index out of range` occurs within `litellm`, wrapped in a `litellm.exceptions.APIConnectionError`. The Crew/Task fails.
### Possible Solution
Commenting out or removing the `tools=[...]` list from the Agent's definition prevents this specific `IndexError`.
The agent can then make LLM calls via Ollama/`litellm` without crashing, though it no longer has any tools available. A hypothetical workaround sketch follows.
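A minimal sketch of the workaround, assuming the same illustrative agent and model as in the reproduction sketch above:

```python
# Workaround sketch (hypothetical agent definition): the same agent, defined without
# a tools list, no longer triggers the IndexError, but it also can no longer call tools.
from crewai import Agent, LLM

llm = LLM(model="ollama/qwen3:4b", base_url="http://localhost:11434")

researcher = Agent(
    role="Researcher",
    goal="Answer questions directly, without tool use",
    backstory="Same agent as in the reproduction sketch, minus the tools.",
    llm=llm,
    # tools=[search],  # commenting this out prevents the crash
)
```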
### Additional context
- **Python Version:** 3.12.9
- **crewai Version:** 0.118.0
- **crewai-tools Version:** 0.43.0
- **litellm Version:** 1.67.1
- **Ollama Version:** 6.4.0
- **LLM Models:** qwen3:8b, qwen3:4b, qwen3:14b
- **Operating System:** Windows 11 Version 24H2 (OS Build 26120.3941)