[Usage]: Can and How we start server on multi-node multi-gpu with torchrun? #8021
Closed
Labels: usage (How to use vllm)
Your current environment
How would you like to use vllm
I want to start a distributed inference server on multiple nodes with multiple GPUs using torchrun.
However, I cannot get it to work.
Can someone help me? Thanks a lot.
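For reference, the multi-node path documented by vLLM uses a Ray cluster rather than torchrun: you start Ray on every node, then launch `vllm serve` once on the head node. A minimal sketch is below; `HEAD_IP`, `MODEL_NAME`, and the parallelism sizes are placeholders you would replace for your own cluster.

```shell
# Sketch of a Ray-based multi-node vLLM launch (not a torchrun recipe).
# Assumes vLLM and Ray are installed on all nodes and the nodes can reach
# each other over the network.

# 1. On the head node, start the Ray head process:
ray start --head --port=6379

# 2. On each worker node, join the cluster (HEAD_IP is a placeholder
#    for the head node's address):
ray start --address=HEAD_IP:6379

# 3. Back on the head node, launch the OpenAI-compatible server.
#    tensor-parallel-size * pipeline-parallel-size should match the
#    total number of GPUs across the cluster (here 2 nodes x 8 GPUs).
vllm serve MODEL_NAME \
    --tensor-parallel-size 8 \
    --pipeline-parallel-size 2
```

The served endpoint then behaves like a single-node server; clients talk only to the head node.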