[BUG] CrewAI Incorrectly Prepends "models/" Prefix to Gemini Model ID When Using Explicit ChatGoogleGenerativeAI LLM #2645

Closed as not planned
@spectrefelsip

Description

When using CrewAI with a Google Gemini model by explicitly specifying the LLM via a ChatGoogleGenerativeAI object (from langchain-google-genai), CrewAI incorrectly modifies the model identifier before passing it to LiteLLM.

Specifically, if ChatGoogleGenerativeAI is initialized with model="gemini/model-name", during task execution (crew.kickoff()), CrewAI appears to transform this identifier into model="models/gemini/model-name".
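
A plausible origin for the prefix (my assumption from the error string, not verified against CrewAI's source): langchain-google-genai itself normalizes the model attribute by prepending "models/" when it is missing, so by the time CrewAI reads the model name off the object it already carries the prefix. A minimal inspection sketch:

import os
from langchain_google_genai import ChatGoogleGenerativeAI

# No API call is made here; the key is only stored.
llm = ChatGoogleGenerativeAI(
    model="gemini/gemini-2.0-flash",
    google_api_key="dummy-key-for-inspection",
)
# On langchain-google-genai 2.0.10 this appears to print
# "models/gemini/gemini-2.0-flash" -- exactly the identifier in the error below.
print(llm.model)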

This causes the call to litellm.completion to fail, as LiteLLM's get_llm_provider function does not recognize the models/provider/model format and raises a litellm.exceptions.BadRequestError: LLM Provider NOT provided.
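
The provider-resolution failure can be reproduced in isolation with litellm.get_llm_provider, the same routine the traceback below ends in (a sketch; exact messages may vary across LiteLLM versions):

import litellm

# Resolves: the "gemini" provider is split off the model name.
print(litellm.get_llm_provider(model="gemini/gemini-2.0-flash"))

# Fails: "models" is not a recognized provider, so the request cannot be routed.
try:
    litellm.get_llm_provider(model="models/gemini/gemini-2.0-flash")
except litellm.exceptions.BadRequestError as e:
    print(f"{type(e).__name__}: {e}")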

A direct LiteLLM test (reproduced under Additional context below) confirms that the call works when the gemini/model-name identifier and the correct API key are used, which isolates the failure to CrewAI's handling of the explicitly passed LLM.

Steps to Reproduce

Setup Environment:

Create a Python virtual environment.

Install required dependencies: pip install crewai langchain-google-genai google-generativeai litellm python-dotenv (see pip freeze below for exact versions).

Set Environment Variables:

In the terminal, export the API Keys:

export GOOGLE_API_KEY='YOUR_GOOGLE_API_KEY'
export GEMINI_API_KEY='YOUR_GEMINI_API_KEY' # Use the same key as GOOGLE_API_KEY
# IMPORTANT: Do NOT set OPENAI_MODEL_NAME for this reproduction!
unset OPENAI_MODEL_NAME
unset OPENAI_API_BASE

Create Python Script (repro_crewai_bug.py):

import os
import traceback
from crewai import Agent, Task, Crew, Process
from langchain_google_genai import ChatGoogleGenerativeAI
# from dotenv import load_dotenv # Uncomment if using .env
# load_dotenv()

# Logging setup (optional, but helpful)
# os.environ['LITELLM_LOG'] = 'DEBUG'

# Verify API Keys
google_api_key = os.getenv("GOOGLE_API_KEY")
gemini_api_key = os.getenv("GEMINI_API_KEY")

if not google_api_key or not gemini_api_key:
    print("🚨 Error: Ensure GOOGLE_API_KEY and GEMINI_API_KEY are defined.")
    exit()
else:
    print("✅ API Keys found.")

# Initialize Explicit LLM
llm = None
try:
    print("⏳ Initializing LLM explicitly...")
    # Use a standard Gemini model for the test
    MODEL_NAME = "gemini/gemini-2.0-flash"
    print(f"   Model to use: {MODEL_NAME}")

    llm = ChatGoogleGenerativeAI(
        model=MODEL_NAME,
        verbose=True,
        temperature=0.5,
        google_api_key=google_api_key
    )
    print("✅ LLM initialized explicitly.")

except Exception as e:
    print(f"🚨 FATAL: Error initializing ChatGoogleGenerativeAI: {e}")
    traceback.print_exc()
    exit()

# Define Agent (Passing explicit LLM)
try:
    print("⏳ Defining the agent...")
    researcher = Agent(
        role='Simple Researcher',
        goal='Explain something',
        backstory='Expert explainer.',
        verbose=True,
        allow_delegation=False,
        llm=llm # <--- Passing the LLM object
    )
    print("✅ Agent defined successfully.")
except Exception as e:
    print(f"🚨 FATAL: Error defining Agent: {e}")
    traceback.print_exc()
    exit()

# Define Task
task = Task(
    description='Explain photosynthesis in one sentence.',
    expected_output='A clear sentence.',
    agent=researcher
)

# Create and Run Crew
try:
    print("⏳ Creating and running the Crew...")
    simple_crew = Crew(agents=[researcher], tasks=[task], verbose=True)
    result = simple_crew.kickoff()
    print("\n✅ Execution completed.")
    print("Result:", result)

except Exception as e:
    print(f"\n🚨 FATAL: Error during kickoff(): {type(e).__name__}")
    print(f"   Message: {e}")
    print("\n--- Traceback ---")
    traceback.print_exc()
    print("-----------------")

Run the Script:

python repro_crewai_bug.py

Expected behavior

The Crew should execute successfully, contact the Gemini API via LiteLLM using the gemini/gemini-2.0-flash identifier, and return the model's response.

Screenshots/Code snippets

(Same script as shown under Steps to Reproduce above.)

Operating System

Ubuntu 24.04

Python Version

3.12

crewAI Version

0.114.0

crewAI Tools Version

0.40.1

Virtual Environment

Venv

Evidence

(crewai_envir) spectre@spectreROGStrix:~$ python repro_crewai_bug.py 
✅ API Keys found.
⏳ Initializing LLM explicitly...
   Model to use: gemini/gemini-2.0-flash
✅ LLM initialized explicitly.
⏳ Defining the agent...

Provider List: https://docs.litellm.ai/docs/providers

✅ Agent defined successfully.
⏳ Creating and running the Crew...

Provider List: https://docs.litellm.ai/docs/providers

╭───────────────────────────────────────────────────────────── Crew Execution Started ──────────────────────────────────────────────────────────────╮
│                                                                                                                                                   │
│  Crew Execution Started                                                                                                                           │
│  Name: crew                                                                                                                                       │
│  ID: 7adef42d-c3e1-4851-87be-1671d5cea7a2                                                                                                         │
│                                                                                                                                                   │
│                                                                                                                                                   │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯


Provider List: https://docs.litellm.ai/docs/providers

🚀 Crew: crew
└── 📋 Task: 3b83abbe-fdcd-48b0-ba80-bdaee53dab55
       Status: Executing Task...


Provider List: https://docs.litellm.ai/docs/providers

🚀 Crew: crew
└── 📋 Task: 3b83abbe-fdcd-48b0-ba80-bdaee53dab55
       Status: Executing Task...
    └── 🤖 Agent: Simple Researcher
            Status: In Progress

# Agent: Simple Researcher
## Task: Explain photosynthesis in one sentence.
🤖 Agent: Simple Researcher
    Status: In Progress
└── 🧠 Thinking...


Provider List: https://docs.litellm.ai/docs/providers

🚀 Crew: crew
└── 📋 Task: 3b83abbe-fdcd-48b0-ba80-bdaee53dab55
       Status: Executing Task...
    └── 🤖 Agent: Simple Researcher
            Status: In Progress
        └── ❌ LLM Failed

╭──────────────────────────────────────────────────────────────────── LLM Error ────────────────────────────────────────────────────────────────────╮
│                                                                                                                                                   │
│  ❌ LLM Call Failed                                                                                                                               │
│  Error: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed                           │
│  model=models/gemini/gemini-2.0-flash                                                                                                             │
│   Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more:                     │
│  https://docs.litellm.ai/docs/providers                                                                                                           │
│                                                                                                                                                   │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

ERROR:root:LiteLLM call failed: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-2.0-flash
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
 Error during LLM call: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-2.0-flash
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
 An unknown error occurred. Please check the details below.
 Error details: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-2.0-flash
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
🚀 Crew: crew
└── 📋 Task: 3b83abbe-fdcd-48b0-ba80-bdaee53dab55
       Assigned to: Simple Researcher
       Status: ❌ Failed
    └── 🤖 Agent: Simple Researcher
            Status: In Progress
        └── ❌ LLM Failed
╭────────────────────────────────────────────────────────────────── Task Failure ───────────────────────────────────────────────────────────────────╮
│                                                                                                                                                   │
│  Task Failed                                                                                                                                      │
│  Name: 3b83abbe-fdcd-48b0-ba80-bdaee53dab55                                                                                                       │
│  Agent: Simple Researcher                                                                                                                         │
│                                                                                                                                                   │
│                                                                                                                                                   │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

╭────────────────────────────────────────────────────────────────── Crew Failure ───────────────────────────────────────────────────────────────────╮
│                                                                                                                                                   │
│  Crew Execution Failed                                                                                                                            │
│  Name: crew                                                                                                                                       │
│  ID: 7adef42d-c3e1-4851-87be-1671d5cea7a2                                                                                                         │
│                                                                                                                                                   │
│                                                                                                                                                   │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯


🚨 FATAL: Error during kickoff(): BadRequestError
   Message: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-2.0-flash
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers

--- Traceback ---
Traceback (most recent call last):
  File "/home/spectre/repro_crewai_bug.py", line 70, in <module>
    result = simple_crew.kickoff()
             ^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/crew.py", line 646, in kickoff
    result = self._run_sequential_process()
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/crew.py", line 758, in _run_sequential_process
    return self._execute_tasks(self.tasks)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/crew.py", line 861, in _execute_tasks
    task_output = task.execute_sync(
                  ^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/task.py", line 328, in execute_sync
    return self._execute_core(agent, context, tools)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/task.py", line 472, in _execute_core
    raise e  # Re-raise the exception after emitting the event
    ^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/task.py", line 392, in _execute_core
    result = agent.execute_task(
             ^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agent.py", line 269, in execute_task
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agent.py", line 250, in execute_task
    result = self.agent_executor.invoke(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 123, in invoke
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 112, in invoke
    formatted_answer = self._invoke_loop()
                       ^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 208, in _invoke_loop
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/agents/crew_agent_executor.py", line 155, in _invoke_loop
    answer = get_llm_response(
             ^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/utilities/agent_utils.py", line 157, in get_llm_response
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/utilities/agent_utils.py", line 148, in get_llm_response
    answer = llm.call(
             ^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/llm.py", line 794, in call
    return self._handle_non_streaming_response(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/crewai/llm.py", line 630, in _handle_non_streaming_response
    response = litellm.completion(**params)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/utils.py", line 1154, in wrapper
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/utils.py", line 1032, in wrapper
    result = original_function(*args, **kwargs)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/main.py", line 3068, in completion
    raise exception_type(
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/main.py", line 979, in completion
    model, custom_llm_provider, dynamic_api_key, api_base = get_llm_provider(
                                                            ^^^^^^^^^^^^^^^^^
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 356, in get_llm_provider
    raise e
  File "/home/spectre/crewai_envir/lib/python3.12/site-packages/litellm/litellm_core_utils/get_llm_provider_logic.py", line 333, in get_llm_provider
    raise litellm.exceptions.BadRequestError(  # type: ignore
litellm.exceptions.BadRequestError: litellm.BadRequestError: LLM Provider NOT provided. Pass in the LLM provider you are trying to call. You passed model=models/gemini/gemini-2.0-flash
 Pass model as E.g. For 'Huggingface' inference endpoints pass in `completion(model='huggingface/starcoder',..)` Learn more: https://docs.litellm.ai/docs/providers
-----------------

Possible Solution

The only approach that seems to work reliably is to not pass an explicit llm object to the Agent, and instead configure CrewAI to infer the LLM via the following environment variables:

export GOOGLE_API_KEY='YOUR_API_KEY'
export GEMINI_API_KEY='YOUR_API_KEY'
export OPENAI_MODEL_NAME='gemini/gemini-2.0-flash' # Or the desired model
export OPENAI_API_BASE='https://api.example.com/v1' # To prevent attempts via OpenAI

With this environment variable configuration, the execution succeeds.
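
For completeness, a sketch of the variant that succeeds under those variables: identical to the reproduction script, but with no llm argument on the Agent.

from crewai import Agent, Task, Crew

# No llm= argument: CrewAI infers the model from OPENAI_MODEL_NAME
# ("gemini/gemini-2.0-flash"), an identifier LiteLLM can route as-is.
researcher = Agent(
    role='Simple Researcher',
    goal='Explain something',
    backstory='Expert explainer.',
    verbose=True,
    allow_delegation=False,
)

task = Task(
    description='Explain photosynthesis in one sentence.',
    expected_output='A clear sentence.',
    agent=researcher,
)

result = Crew(agents=[researcher], tasks=[task], verbose=True).kickoff()
print(result)

Passing CrewAI's own LLM wrapper instead of a langchain object (from crewai import LLM, then LLM(model="gemini/gemini-2.0-flash")) may also sidestep the prefixing, though I have not verified that here.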

Additional context

Direct LiteLLM test showing that the gemini/gemini-2.0-flash identifier works outside CrewAI:

import os
import litellm
import traceback

os.environ['LITELLM_LOG'] = 'DEBUG'

print("Testing direct call with LiteLLM...")

try:
    api_key = os.getenv("GEMINI_API_KEY")
    if not api_key:
        print("Error: GEMINI_API_KEY not defined.")
        exit()

    print(f"Using model: gemini/gemini-2.0-flash")  # The same model as above
    print(f"Using API Key: ...{api_key[-4:]}")

    response = litellm.completion(
        model="gemini/gemini-2.0-flash",
        messages=[{"role": "user", "content": "Hello, who are you?"}],
        api_key=api_key
    )
    print("\n--- LiteLLM Response ---")
    print(response)
    print("------------------------")

except Exception as e:
    print(f"\n🚨 Error in direct LiteLLM call: {e}")
    print("--- Traceback ---")
    traceback.print_exc()
    print("---------------")

pip freeze

aiohappyeyeballs==2.6.1
aiohttp==3.11.16
aiosignal==1.3.2
alembic==1.15.2
annotated-types==0.7.0
anyio==4.9.0
appdirs==1.4.4
asgiref==3.8.1
asttokens==3.0.0
attrs==25.3.0
auth0-python==4.9.0
backoff==2.2.1
bcrypt==4.3.0
beautifulsoup4==4.13.4
blinker==1.9.0
build==1.2.2.post1
cachetools==5.5.2
certifi==2025.1.31
cffi==1.17.1
charset-normalizer==3.4.1
chroma-hnswlib==0.7.6
chromadb==0.5.23
click==8.1.8
cohere==5.15.0
coloredlogs==15.0.1
crewai==0.114.0
crewai-tools==0.40.1
cryptography==44.0.2
dataclasses-json==0.6.7
decorator==5.2.1
Deprecated==1.2.18
deprecation==2.1.0
distro==1.9.0
docker==7.1.0
docstring_parser==0.16
durationpy==0.9
embedchain==0.1.128
et_xmlfile==2.0.0
executing==2.2.0
fastapi==0.115.9
fastavro==1.10.0
filelock==3.18.0
filetype==1.2.0
flatbuffers==25.2.10
frozenlist==1.6.0
fsspec==2025.3.2
google-ai-generativelanguage==0.6.15
google-api-core==2.24.2
google-api-python-client==2.167.0
google-auth==2.39.0
google-auth-httplib2==0.2.0
google-generativeai==0.8.5
googleapis-common-protos==1.70.0
gptcache==0.1.44
greenlet==3.2.0
grpcio==1.72.0rc1
grpcio-status==1.71.0
grpcio-tools==1.71.0
h11==0.14.0
h2==4.2.0
hpack==4.1.0
httpcore==1.0.8
httplib2==0.22.0
httptools==0.6.4
httpx==0.27.2
httpx-sse==0.4.0
huggingface-hub==0.30.2
humanfriendly==10.0
hyperframe==6.1.0
idna==3.10
importlib_metadata==8.6.1
importlib_resources==6.5.2
instructor==1.7.9
ipython==9.1.0
ipython_pygments_lexers==1.1.1
jedi==0.19.2
Jinja2==3.1.6
jiter==0.8.2
json5==0.12.0
json_repair==0.41.1
jsonpatch==1.33
jsonpickle==4.0.5
jsonpointer==3.0.0
jsonref==1.1.0
jsonschema==4.23.0
jsonschema-specifications==2024.10.1
kubernetes==32.0.1
lancedb==0.21.2
langchain==0.3.23
langchain-cohere==0.3.5
langchain-community==0.3.21
langchain-core==0.3.54
langchain-experimental==0.3.4
langchain-google-genai==2.0.10
langchain-openai==0.2.14
langchain-text-splitters==0.3.8
langsmith==0.3.32
litellm==1.60.2
Mako==1.3.10
markdown-it-py==3.0.0
MarkupSafe==3.0.2
marshmallow==3.26.1
matplotlib-inline==0.1.7
mdurl==0.1.2
mem0ai==0.1.92
mmh3==5.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.4.3
mypy-extensions==1.0.0
networkx==3.4.2
nodeenv==1.9.1
numpy==2.2.4
oauthlib==3.2.2
onnxruntime==1.21.1
openai==1.75.0
openpyxl==3.1.5
opentelemetry-api==1.32.1
opentelemetry-exporter-otlp-proto-common==1.32.1
opentelemetry-exporter-otlp-proto-grpc==1.32.1
opentelemetry-exporter-otlp-proto-http==1.32.1
opentelemetry-instrumentation==0.53b1
opentelemetry-instrumentation-asgi==0.53b1
opentelemetry-instrumentation-fastapi==0.53b1
opentelemetry-proto==1.32.1
opentelemetry-sdk==1.32.1
opentelemetry-semantic-conventions==0.53b1
opentelemetry-util-http==0.53b1
orjson==3.10.16
overrides==7.7.0
packaging==24.2
pandas==2.2.3
parso==0.8.4
pdfminer.six==20250327
pdfplumber==0.11.6
pexpect==4.9.0
pillow==11.2.1
portalocker==2.10.1
posthog==3.25.0
prompt_toolkit==3.0.51
propcache==0.3.1
proto-plus==1.26.1
protobuf==5.29.4
psycopg2-binary==2.9.10
ptyprocess==0.7.0
pure_eval==0.2.3
pyarrow==19.0.1
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.22
pydantic==2.11.3
pydantic-settings==2.9.1
pydantic_core==2.33.1
Pygments==2.19.1
PyJWT==2.10.1
pyparsing==3.2.3
pypdf==5.4.0
pypdfium2==4.30.1
PyPika==0.48.9
pyproject_hooks==1.2.0
pyright==1.1.399
pysbd==0.3.4
python-dateutil==2.9.0.post0
python-dotenv==1.1.0
pytube==15.0.0
pytz==2024.2
pyvis==0.3.2
PyYAML==6.0.2
qdrant-client==1.13.3
referencing==0.36.2
regex==2024.11.6
requests==2.32.3
requests-oauthlib==2.0.0
requests-toolbelt==1.0.0
rich==13.9.4
rpds-py==0.24.0
rsa==4.9.1
schema==0.7.7
setuptools==78.1.0
shellingham==1.5.4
six==1.17.0
sniffio==1.3.1
soupsieve==2.6
SQLAlchemy==2.0.40
stack-data==0.6.3
starlette==0.45.3
sympy==1.13.3
tabulate==0.9.0
tenacity==9.1.2
tiktoken==0.9.0
tokenizers==0.20.3
tomli==2.2.1
tomli_w==1.2.0
tqdm==4.67.1
traitlets==5.14.3
typer==0.15.2
types-requests==2.32.0.20250328
typing-inspect==0.9.0
typing-inspection==0.4.0
typing_extensions==4.13.2
tzdata==2025.2
uritemplate==4.1.1
urllib3==2.4.0
uv==0.6.14
uvicorn==0.34.2
uvloop==0.21.0
watchfiles==1.0.5
wcwidth==0.2.13
websocket-client==1.8.0
websockets==15.0.1
wrapt==1.17.2
yarl==1.20.0
zipp==3.21.0
zstandard==0.23.0
