🚀 The feature, motivation and pitch
When serving a Qwen2.5 model with --task embedding, vLLM fails with: ValueError: This model does not support the 'embedding' task. Supported tasks: {'generate'}
Reproduction:
python -m vllm.entrypoints.openai.api_server --served-model-name Qwen2.5-1.5B-Instruct --model Qwen/Qwen2.5-1.5B-Instruct --max_model_len 17000 --dtype float16 --trust_remote_code --host 127.0.0.1 --port 8080 --uvicorn-log-level debug --api-key gUaq3eYuYZjuBfejwH-lVrFAlTbi9g3vQnRZD4jBCYA --task "embedding"
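For context, once the task were supported, the server started above would be queried through the OpenAI-compatible /v1/embeddings route. Below is a minimal sketch using the openai Python client; the base URL, API key, and served model name simply mirror the command above, and none of this is verified against a working setup:

```python
# Sketch of the client call the server above would need to answer once
# `--task embedding` is accepted for Qwen2.5 (illustrative, not tested).
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8080/v1",
    api_key="gUaq3eYuYZjuBfejwH-lVrFAlTbi9g3vQnRZD4jBCYA",  # key from the command above
)

response = client.embeddings.create(
    model="Qwen2.5-1.5B-Instruct",  # served model name from the command above
    input=["What is the capital of France?"],
)
print(len(response.data[0].embedding))  # dimensionality of the returned embedding
```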
The Qwen2.5 model is recognized as a text-generation architecture, "Qwen2ForCausalLM": ("qwen2", "Qwen2ForCausalLM"), as shown by its Hugging Face config:
Qwen2Config {
"_name_or_path": "Qwen/Qwen2.5-1.5B-Instruct",
"architectures": [
"Qwen2ForCausalLM"
],
However, it is not registered in the embedding-model section of vllm/vllm/model_executor/models/registry.py (a sketch of a possible addition follows the snippet below):
_EMBEDDING_MODELS = {
# [Text-only]
"BertModel": ("bert", "BertEmbeddingModel"),
"Gemma2Model": ("gemma2", "Gemma2EmbeddingModel"),
"MistralModel": ("llama", "LlamaEmbeddingModel"),
"Qwen2ForRewardModel": ("qwen2_rm", "Qwen2ForRewardModel"),
"Qwen2ForSequenceClassification": (
"qwen2_cls", "Qwen2ForSequenceClassification"),
# [Multimodal]
"LlavaNextForConditionalGeneration": ("llava_next", "LlavaNextForConditionalGeneration"), # noqa: E501
"Phi3VForCausalLM": ("phi3v", "Phi3VForCausalLM"),
}
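One possible direction, sketched below purely for illustration: add a Qwen2 entry to _EMBEDDING_MODELS pointing at an embedding wrapper class. Both the architecture key and the Qwen2EmbeddingModel class name are assumptions on my part, not existing vLLM code:

```python
# Hypothetical sketch only, not a working patch: register a Qwen2 embedding
# wrapper in vllm/model_executor/models/registry.py. "Qwen2EmbeddingModel"
# does not exist in the current tree; it would have to be implemented by
# analogy with LlamaEmbeddingModel (decoder backbone plus a pooling layer).
_EMBEDDING_MODELS = {
    # ... existing entries as shown above ...
    "Qwen2Model": ("qwen2", "Qwen2EmbeddingModel"),  # hypothetical new entry
    # Note: this checkpoint reports "Qwen2ForCausalLM" in its config, so the
    # real fix may instead need a mapping or adapter for that architecture
    # name rather than the bare "Qwen2Model" key used here.
}
```

With the registry as it stands today, starting the server with --task embedding fails as follows: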
Traceback (most recent call last):
File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
self.run()
File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/tarik/AI-WS/vllm/vllm/engine/multiprocessing/engine.py", line 397, in run_mp_engine
engine = MQLLMEngine.from_engine_args(engine_args=engine_args,
File "/home/tarik/AI-WS/vllm/vllm/engine/multiprocessing/engine.py", line 140, in from_engine_args
engine_config = engine_args.create_engine_config()
File "/home/tarik/AI-WS/vllm/vllm/engine/arg_utils.py", line 930, in create_engine_config
model_config = self.create_model_config()
File "/home/tarik/AI-WS/vllm/vllm/engine/arg_utils.py", line 864, in create_model_config
return ModelConfig(
File "/home/tarik/AI-WS/vllm/vllm/config.py", line 224, in init
supported_tasks, task = self._resolve_task(task, self.hf_config)
File "/home/tarik/AI-WS/vllm/vllm/config.py", line 307, in _resolve_task
raise ValueError(msg)
ValueError: This model does not support the 'embedding' task. Supported tasks: {'generate'}
Please add 'embedding' task support for this model.
Thank you.
Alternatives
No response
Additional context
No response
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.