### Your current environment

Not relevant.
### 🐛 Describe the bug

When you follow the official docs for Qwen2.5-VL, you get the following error:
```
Traceback (most recent call last):
  File "/myenv/bin/vllm", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/scripts.py", line 204, in main
    args.dispatch_function(args)
  File "/myenv/lib/python3.12/site-packages/vllm/scripts.py", line 44, in serve
    uvloop.run(run_server(args))
  File "/myenv/lib/python3.12/site-packages/uvloop/__init__.py", line 109, in run
    return __asyncio.run(
           ^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
  File "/myenv/lib/python3.12/site-packages/uvloop/__init__.py", line 61, in wrapper
    return await main
           ^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 875, in run_server
    async with build_async_engine_client(args) as engine_client:
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 136, in build_async_engine_client
    async with build_async_engine_client_from_engine_args(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/root/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/contextlib.py", line 210, in __aenter__
    return await anext(self.gen)
           ^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/entrypoints/openai/api_server.py", line 160, in build_async_engine_client_from_engine_args
    engine_client = AsyncLLMEngine.from_engine_args(
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/v1/engine/async_llm.py", line 104, in from_engine_args
    vllm_config = engine_args.create_engine_config(usage_context)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 1075, in create_engine_config
    model_config = self.create_model_config()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/engine/arg_utils.py", line 998, in create_model_config
    return ModelConfig(
           ^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/config.py", line 364, in __init__
    self.multimodal_config = self._init_multimodal_config(
                             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/config.py", line 424, in _init_multimodal_config
    if ModelRegistry.is_multimodal_model(architectures):
       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 445, in is_multimodal_model
    model_cls, _ = self.inspect_model_cls(architectures)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 405, in inspect_model_cls
    return self._raise_for_unsupported(architectures)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/myenv/lib/python3.12/site-packages/vllm/model_executor/models/registry.py", line 357, in _raise_for_unsupported
    raise ValueError(
ValueError: Model architectures ['Qwen2_5_VLForConditionalGeneration'] failed to be inspected. Please check the logs for more details.
```
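For context, the failure mode can be sketched in miniature: vLLM's model registry tries to resolve each architecture name from the checkpoint's config, and raises the `ValueError` above when resolution fails. This is a simplified illustration, not vLLM's actual code, and the registry contents below are hypothetical:

```python
class ModelRegistry:
    """Toy stand-in for vLLM's registry; not the real implementation."""

    # Hypothetical contents -- the real registry maps many architecture
    # names to model classes, and inspection can also fail when the
    # installed transformers build no longer matches what vLLM imports.
    _models = {"LlamaForCausalLM": object}

    @classmethod
    def inspect_model_cls(cls, architectures):
        for arch in architectures:
            if arch in cls._models:
                return cls._models[arch], arch
        # Mirrors the error message seen in the traceback.
        raise ValueError(
            f"Model architectures {architectures} failed to be inspected. "
            "Please check the logs for more details.")
```

The point is that the error is raised during config construction, before any weights are loaded, which is why the server dies immediately at startup.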
This happens because, for Qwen2.5-VL, we currently depend on the development branch of transformers:

```
pip install git+https://github.com/huggingface/transformers
```

which has just merged a breaking change: huggingface/transformers@33d1d71
### Workaround

Until this is fixed, pin transformers to the commit just before the breaking change instead:

```
pip install --upgrade git+https://github.com/huggingface/transformers.git@336dc69d63d56f232a183a3e7f52790429b871ef
```
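If you want to confirm which transformers build actually ended up in the environment before starting the server, a minimal standard-library check (the only assumption is the distribution name `transformers`) is:

```python
from importlib import metadata


def transformers_version():
    """Return the installed transformers version string, or None if absent."""
    try:
        return metadata.version("transformers")
    except metadata.PackageNotFoundError:
        return None


print(transformers_version())
```

Note that a commit-pinned install typically reports a `.dev0`-style version, so the version string alone cannot distinguish two dev commits; it only confirms that some transformers build is installed.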