
[Bugfix][CI/Build] Fix CUDA 11.8 Build #9386

Merged

Conversation

@LucasWilkinson (Contributor) commented Oct 15, 2024

Don't build 9.0a for scaled_mm_c2x, since it's outside the CUDA 12.0 guard and wouldn't help perf much anyway.

The issue: with CUDA 11.8, if we were building for 9.0 we wouldn't build scaled_mm_c3x, so we would instead try to build scaled_mm_c2x for all arches, i.e. "7.5;8.0;8.6;8.9;9.0;9.0a". This is incorrect, though, since 9.0a isn't supported by CUDA 11.8. We can simply drop 9.0a from scaled_mm_c2x, since scaled_mm_c2x can't take advantage of the 9.0a features anyway.

(This also fixes the c3x error message falsely reporting that there were no compatible arches on 11.8.)
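To make the change concrete, here is a rough Python sketch of the arch-selection logic described above. The real logic lives in vLLM's CMake, so the helper name and return shape here are hypothetical; only the two rules it encodes (the CUDA 12.0 guard on c3x, and dropping 9.0a from c2x) come from this PR:

```python
# Illustrative sketch only -- vLLM implements this in CMake, not Python.
# The helper name and return shape are hypothetical.

def select_scaled_mm_arches(arches, cuda_version):
    """Split a CUDA arch list between scaled_mm_c2x and scaled_mm_c3x."""
    arches = set(arches)
    # scaled_mm_c3x targets Hopper (9.0a) and sits behind a CUDA 12.0 guard.
    c3x = {"9.0a"} if "9.0a" in arches and cuda_version >= (12, 0) else set()
    # The fix: never hand 9.0a to scaled_mm_c2x. CUDA 11.8 can't compile
    # for 9.0a, and c2x doesn't use the 9.0a features anyway.
    c2x = arches - {"9.0a"}
    return c2x, c3x
```

With CUDA 11.8 and the arch list from the description, 9.0a is now dropped entirely instead of being passed through to the c2x build.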


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which starts with a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@simon-mo (Collaborator) commented

Thanks @LucasWilkinson, is this ready for review?

@LucasWilkinson (Contributor, Author) commented

> Thanks @LucasWilkinson, is this ready for review?

Basically just waiting for my Docker build tests to finish to confirm the fix; they are slow, haha.

@LucasWilkinson LucasWilkinson marked this pull request as ready for review October 15, 2024 20:57
@LucasWilkinson (Contributor, Author) commented

Confirmed. This builds (i.e. this PR):

```dockerfile
FROM pytorch/pytorch:2.4.0-cuda11.8-cudnn9-devel AS build

ARG torch_cuda_arch_list='7.0 7.5 8.0 8.6 8.9 9.0'
ENV TORCH_CUDA_ARCH_LIST=${torch_cuda_arch_list}

RUN apt update && apt install gcc g++ git -y && apt clean && rm -rf /var/lib/apt/lists/*

ENV PATH=/workspace-lib:/workspace-lib/bin:$PATH
ENV PYTHONUSERBASE=/workspace-lib

RUN pip install git+https://github.com/neuralmagic/vllm.git@5a7b00e7a6377ca7971de3ca762583a9153f4a55 --no-cache-dir --user -v
```

and this fails (i.e. main):

```dockerfile
FROM pytorch/pytorch:2.4.0-cuda11.8-cudnn9-devel AS build

ARG torch_cuda_arch_list='7.0 7.5 8.0 8.6 8.9 9.0'
ENV TORCH_CUDA_ARCH_LIST=${torch_cuda_arch_list}

RUN apt update && apt install gcc g++ git -y && apt clean && rm -rf /var/lib/apt/lists/*

ENV PATH=/workspace-lib:/workspace-lib/bin:$PATH
ENV PYTHONUSERBASE=/workspace-lib

RUN pip install git+https://github.com/vllm-project/vllm.git@e9d517f27673ec8736c026f2311d3c250d5f9061 --no-cache-dir --user -v
```
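As a quick sanity check on arch lists like the one above, you can screen a TORCH_CUDA_ARCH_LIST-style string for arches the toolkit can't target. This is an illustrative helper, not part of vLLM; its version table only encodes the single constraint from this PR, that 9.0a requires CUDA 12.0+:

```python
# Illustrative helper, not part of vLLM. The table only encodes the
# constraint from this PR: the 9.0a arch requires CUDA 12.0+.
MIN_CUDA_FOR_ARCH = {"9.0a": (12, 0)}

def unsupported_arches(arch_list: str, cuda_version: tuple) -> list:
    """Return arches in a TORCH_CUDA_ARCH_LIST-style string (space- or
    semicolon-separated) that the given CUDA toolkit cannot target."""
    arches = arch_list.replace(";", " ").split()
    return [a for a in arches if cuda_version < MIN_CUDA_FOR_ARCH.get(a, (0, 0))]
```

For example, the expanded list "7.5;8.0;8.6;8.9;9.0;9.0a" from the PR description is flagged under CUDA 11.8 (it contains 9.0a), while the '7.0 7.5 8.0 8.6 8.9 9.0' list used in these Dockerfiles passes.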

@tlrmchlsmth (Collaborator) left a comment

LGTM, thanks for the fix!

@tlrmchlsmth tlrmchlsmth added the ready ONLY add when PR is ready to merge/full CI is needed label Oct 15, 2024
@tlrmchlsmth tlrmchlsmth enabled auto-merge (squash) October 15, 2024 21:05
@mgoin (Collaborator) left a comment

Thanks for the quick fix!

@tlrmchlsmth tlrmchlsmth merged commit 717a5f8 into vllm-project:main Oct 16, 2024
89 checks passed
charlifu pushed a commit to charlifu/vllm that referenced this pull request Oct 23, 2024
Signed-off-by: charlifu <charlifu@amd.com>
vrdn-23 pushed a commit to vrdn-23/vllm that referenced this pull request Oct 23, 2024
Signed-off-by: Vinay Damodaran <vrdn@hey.com>
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
Signed-off-by: Alvant <alvasian@yandex.ru>
garg-amit pushed a commit to garg-amit/vllm that referenced this pull request Oct 28, 2024
Signed-off-by: Amit Garg <mitgarg17495@gmail.com>
FerdinandZhong pushed a commit to FerdinandZhong/vllm that referenced this pull request Oct 29, 2024
Signed-off-by: qishuai <ferdinandzhong@gmail.com>
Labels
ready ONLY add when PR is ready to merge/full CI is needed
4 participants