Your current environment
The output of `python collect_env.py`
How would you like to use vllm
Hi there!
When I try to run a model with vLLM, I get this error: `ModuleNotFoundError: No module named 'triton'`.
My local environment is macOS and I have successfully installed vLLM.
(base) ➜ ~ vllm -v
INFO 03-16 22:15:08 [__init__.py:256] Automatically detected platform cpu.
0.7.4.dev483+gd1ad2a57
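To give a concrete picture of what I am trying to do, a minimal offline-inference script along these lines is what I run (the model name below is only a small placeholder for illustration, not necessarily the model I actually use):

```python
# Minimal offline-inference repro sketch; "facebook/opt-125m" is just a
# small placeholder model chosen for illustration.
from vllm import LLM, SamplingParams

llm = LLM(model="facebook/opt-125m")
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```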
I don't think I need NVIDIA hardware to run a model on the CPU, and the documentation for the CPU backend doesn't mention any NVIDIA dependencies. So do I have to install the `triton` module?
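In case it helps with triage, here is a small check (just a sketch; `from vllm import LLM` is simply the import I happen to use) that confirms whether `triton` is visible in the environment and prints the full traceback of the failing import:

```python
# Quick probe: is `triton` installed at all, and which vLLM import trips over it?
import importlib.util
import traceback

print("triton importable:", importlib.util.find_spec("triton") is not None)

try:
    from vllm import LLM  # swap in whatever vLLM entry point you actually call
except ModuleNotFoundError:
    traceback.print_exc()
```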
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.