[CI/Build] simplify Dockerfile build for ARM64 / GH200 #11212
Conversation
Signed-off-by: drikster80 <ed.sealing@gmail.com>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.
…/causal-conv1d/mamba/flashinfer/bitsandbytes
thanks for the great effort!
cc @simon-mo if you can help set up a GH200 machine for testing.
@drikster80 I'm planning to merge this because it requires far fewer modules to be built from source. Your commits are kept here, and thanks for your great contribution!
Sounds great. Glad it was able to get tested and merged.
@cennn thanks for the great work!
…ject#11212) Signed-off-by: drikster80 <ed.sealing@gmail.com> Co-authored-by: drikster80 <ed.sealing@gmail.com>
From PR: #10499
Fixes issue: #2021
This contribution simplifies the Dockerfile build process for ARM64 systems. Unnecessary build-from-source steps have been removed, and requirements handling has been optimized so that the correct torch and bitsandbytes builds are installed for ARM64+CUDA compatibility. The changes have been tested on the NVIDIA GH200 platform with the models meta-llama/Llama-3.1-8B and Qwen/Qwen2.5-0.5B-Instruct.
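To illustrate the general approach (a minimal sketch, not the PR's actual Dockerfile contents), the build can branch on the target platform and install prebuilt ARM64-compatible wheels instead of compiling them from source. The `TARGETPLATFORM` build argument is provided automatically by Docker Buildx; the package names and the `requirements-cuda.txt` file below are illustrative assumptions:

```dockerfile
# Sketch only: branch on the buildx-provided TARGETPLATFORM so ARM64
# builds pull compatible torch/bitsandbytes wheels rather than
# compiling them from source. Package pins here are illustrative,
# not the PR's exact contents.
ARG TARGETPLATFORM
RUN if [ "$TARGETPLATFORM" = "linux/arm64" ]; then \
        python3 -m pip install torch bitsandbytes; \
    else \
        python3 -m pip install -r requirements-cuda.txt; \
    fi
```

Folding the platform check into a single stage like this avoids maintaining a separate ARM64 Dockerfile while keeping the x86_64 path unchanged.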
The following command was used to build, and is confirmed working, on an NVIDIA GH200:

```bash
docker build . --target vllm-openai --platform "linux/arm64" \
  -t cenncenn/vllm-gh200-openai:v0.6.4.post1 \
  --build-arg max_jobs=66 \
  --build-arg nvcc_threads=2 \
  --build-arg torch_cuda_arch_list="9.0+PTX" \
  --build-arg vllm_fa_cmake_gpu_arches="90-real" \
  --build-arg RUN_WHEEL_CHECK='false'
```
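Here `max_jobs` and `nvcc_threads` control build parallelism, while `torch_cuda_arch_list="9.0+PTX"` and `vllm_fa_cmake_gpu_arches="90-real"` target the GH200's Hopper GPU (compute capability 9.0). For reference, a hypothetical invocation of the resulting image (the model name and port are illustrative, not from the PR):

```bash
# Usage sketch: serve a model with the image built above.
# --gpus all exposes the GH200 GPU; remaining args are passed to
# the vLLM OpenAI-compatible server entrypoint.
docker run --gpus all -p 8000:8000 \
  cenncenn/vllm-gh200-openai:v0.6.4.post1 \
  --model meta-llama/Llama-3.1-8B
```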