Closed
Description
Hello,
I'm trying to install vLLM on an AMD server, but I'm unable to build the package because CUDA is not installed. Is there any way to configure it to work with ROCm instead?
!pip install vllm
Error:
RuntimeError: Cannot find CUDA_HOME. CUDA must be available in order to build the package.
ROCm is installed and verified.
PyTorch 2.0 (ROCm build)
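For reference, this is roughly how I verified the ROCm PyTorch install (a minimal sketch; on ROCm builds torch.version.hip is populated and the torch.cuda API routes through HIP):

```python
import torch

# On the ROCm build of PyTorch, torch.version.hip is set (it is None on CUDA builds).
print("HIP version:", torch.version.hip)

# torch.cuda.* works on ROCm as well, backed by HIP.
print("GPU available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
```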