
[BUG] Model's maximum context length causes a crew to throw a Python error and stop when scraping a web page #2696

Closed
@yqup

Description


I have noticed since the most recent update, crewAI 0.114.0, that when a task pulls information from a web page longer than the LLM's context window, the agent throws an error.

This has occurred in three crews now, so I am creating this issue.

This is the CrewAI error:

Error during LLM call: litellm.ContextWindowExceededError: litellm.BadRequestError: ContextWindowExceededError: OpenAIException - Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 393828 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

This is the Python error that causes the agent to stop running:

litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 393828 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

Ideally, the task should know the model's maximum context length, truncate the tool result to fit, and let the agent know that truncation happened.
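Something along these lines could paper over it from user land today; a minimal sketch, assuming ScrapeWebsiteTool can be subclassed at its _run method (TruncatingScrapeTool and the character budget are my own, using ~4 characters per token as a rough heuristic):

from crewai_tools import ScrapeWebsiteTool

# Hypothetical budget: ~4 chars/token against a 128k window, with headroom
# left for the rest of the prompt and the model's response.
MAX_CHARS = 100_000 * 4

class TruncatingScrapeTool(ScrapeWebsiteTool):
    """ScrapeWebsiteTool variant that caps how much page text it returns."""

    def _run(self, *args, **kwargs) -> str:
        text = super()._run(*args, **kwargs)
        if len(text) > MAX_CHARS:
            # Append a note so the agent knows the page was cut short.
            return text[:MAX_CHARS] + "\n\n[NOTE: page truncated to fit the context window]"
        return text

Dropping TruncatingScrapeTool() in place of ScrapeWebsiteTool() in the task's tools list leaves the rest of the crew unchanged.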

Steps to Reproduce

  1. Create a crew that uses ScrapeWebsiteTool()
  2. Run the crew
  3. The error occurs when the tool returns a page longer than the model's context window
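For a self-contained repro, something like this sketch should trigger it against any 128k-token model (the agent and task text are placeholders; any sufficiently long page will do):

from crewai import Agent, Crew, Task
from crewai_tools import ScrapeWebsiteTool

scraper = Agent(
    role="Scraper",
    goal="Summarize the page at {url}",
    backstory="Reads entire pages verbatim before summarizing.",
)

scrape_task = Task(
    description="Scrape {url} and summarize its contents.",
    expected_output="A short summary of the page.",
    tools=[ScrapeWebsiteTool()],
    agent=scraper,
)

crew = Crew(agents=[scraper], tasks=[scrape_task])
crew.kickoff(inputs={"url": "https://example.com/some-very-long-page"})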

I have been able to work around this by using an LLM with a 1M-token context window, but that is a stopgap.
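For anyone else hitting this, the workaround is just pointing the agent at a large-context model through crewAI's LLM wrapper; a sketch (the model choice is only an example, any ~1M-token model litellm supports works):

from crewai import Agent, LLM

# Workaround: a model whose context window comfortably exceeds the
# largest pages the crew scrapes (Gemini 1.5 Pro takes ~1M tokens).
big_context_llm = LLM(model="gemini/gemini-1.5-pro")

links_agent = Agent(
    role="Link researcher",
    goal="Gather and summarize links from scraped pages",
    backstory="An analyst who reads long pages end to end.",
    llm=big_context_llm,
)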

Expected behavior

I would expect the code to handle the error gracefully.
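litellm already raises a typed ContextWindowExceededError, so the executor could catch it and retry with a shortened prompt instead of crashing the crew. A rough sketch of that shape (safe_completion and the halving strategy are illustrative, not crewAI internals):

import litellm

def safe_completion(model: str, messages: list[dict]) -> str:
    """Call the LLM once; on context overflow, shrink the biggest message and retry."""
    try:
        response = litellm.completion(model=model, messages=messages)
    except litellm.ContextWindowExceededError:
        # Crude recovery: halve the largest message (usually the scraped
        # page) and try again, rather than letting the whole crew die.
        biggest = max(messages, key=lambda m: len(m.get("content") or ""))
        biggest["content"] = biggest["content"][: len(biggest["content"]) // 2]
        response = litellm.completion(model=model, messages=messages)
    return response.choices[0].message.content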

Screenshots/Code snippets

@task
def links_task(self) -> Task:
    return Task(
        config=self.tasks_config['links_task'],
        tools=[SerperDevTool(), ScrapeWebsiteTool(), CSVSearchTool()],
    )

Operating System

macOS Sonoma

Python Version

3.12

crewAI Version

0.114.0

crewAI Tools Version

0.114.0

Virtual Environment

Venv

Evidence

litellm.llms.openai.common_utils.OpenAIError: Error code: 400 - {'error': {'message': "This model's maximum context length is 128000 tokens. However, your messages resulted in 393828 tokens. Please reduce the length of the messages.", 'type': 'invalid_request_error', 'param': 'messages', 'code': 'context_length_exceeded'}}

Possible Solution

Truncate the scraped page content based on the model's context window so the call does not fail.
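litellm ships the pieces needed for this; a sketch built on its get_model_info and token_counter utilities (the reserve margin and the 10%-per-pass trimming loop are illustrative choices):

import litellm

def truncate_to_fit(text: str, model: str = "gpt-4o", reserve: int = 4_000) -> str:
    """Trim text to the model's input window, keeping `reserve` tokens
    free for the rest of the prompt and the response."""
    budget = litellm.get_model_info(model)["max_input_tokens"] - reserve
    while litellm.token_counter(model=model, text=text) > budget:
        text = text[: int(len(text) * 0.9)]  # shave 10% per pass; crude but safe
    return text

The tool (or the agent executor) could call truncate_to_fit on the scraped page before it ever reaches the prompt.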

Additional context

none
