
Installing with ROCm #621

Closed

Description

@baderex

Hello,

I'm trying to install vLLM on an AMD server, but I'm unable to build the package because CUDA is not installed. Is there any way to configure it to work with ROCm instead?

```
!pip install vllm
```

Error:

```
RuntimeError: Cannot find CUDA_HOME. CUDA must be available in order to build the package.
```

ROCm is installed and verified.

PyTorch 2.0 (ROCm build)
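
For reference, this is a minimal sketch of the kind of check I would expect to pass on this machine with the ROCm wheel of PyTorch installed (the exact version strings are assumptions); vLLM itself still fails at build time because it looks for `CUDA_HOME`:

```python
# Sanity check that the installed PyTorch is a ROCm build.
# Not part of the vLLM build; just verifies the local environment.
import torch

print(torch.__version__)          # ROCm wheels are tagged like "2.0.1+rocm5.4.2" (example)
print(torch.version.hip)          # HIP version string on ROCm builds, None on CUDA builds
print(torch.cuda.is_available())  # True on ROCm builds as well, since HIP is exposed via the torch.cuda API
```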
