
[V1] Fix Compilation config & Enable CUDA graph by default #10528


Merged
merged 3 commits into main from v1-compile-config on Nov 21, 2024

Conversation

WoosukKwon (Collaborator) commented Nov 21, 2024

This PR fixes a performance bug on V1 introduced by #10437, which disabled custom ops even when torch.compile was not used.

Also, this PR enables piecewise CUDA graphs by default, now that #10237 has been merged.

Signed-off-by: Woosuk Kwon <woosuk.kwon@berkeley.edu>
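
As a quick usage sketch of the new default (the model name is the one mentioned later in this thread; enforce_eager is the existing flag for opting out of CUDA graphs), V1 now captures piecewise CUDA graphs automatically:

import os
os.environ["VLLM_USE_V1"] = "1"  # opt in to the V1 engine

from vllm import LLM, SamplingParams

# With this PR, piecewise CUDA graphs are captured by default on V1.
llm = LLM(model="meta-llama/Llama-3.2-1B")

# Forcing eager execution disables CUDA graph capture entirely.
llm_eager = LLM(model="meta-llama/Llama-3.2-1B", enforce_eager=True)

outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)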

👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀

@simon-mo simon-mo enabled auto-merge (squash) November 21, 2024 17:03
@github-actions github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Nov 21, 2024
@WoosukKwon WoosukKwon disabled auto-merge November 21, 2024 20:53
@WoosukKwon WoosukKwon merged commit f9310cb into main Nov 21, 2024
37 of 39 checks passed
@WoosukKwon WoosukKwon deleted the v1-compile-config branch November 21, 2024 20:53
sleepwalker2017 pushed a commit to sleepwalker2017/vllm that referenced this pull request Dec 13, 2024
anko-intel pushed a commit to HabanaAI/vllm-fork that referenced this pull request Feb 10, 2025

_, total_gpu_memory = torch.cuda.mem_get_info()

Is there a reason why this doesn't just use total_gpu_memory from after the profile run (like it was done before)?
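
For context, a minimal sketch of what torch.cuda.mem_get_info() reports: it returns a (free_bytes, total_bytes) tuple for the current device, and the total term is the device capacity, so it reads the same whether it is queried before or after the profile run; only the free term moves.

import torch

free_before, total_before = torch.cuda.mem_get_info()

# Stand-in for the profile run: allocate some device memory.
scratch = torch.empty(256 * 1024 * 1024, dtype=torch.uint8, device="cuda")

free_after, total_after = torch.cuda.mem_get_info()

assert total_before == total_after  # device capacity does not change
print(f"free: {free_before} -> {free_after} bytes")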

@mergify mergify bot added the v1 label May 30, 2025
torch.cuda.empty_cache()
torch_allocated_bytes = torch.cuda.memory_stats()["allocated_bytes.all.current"]
total_allocated_bytes = torch.cuda.mem_get_info(

I think this is too pessimistic: if anything else is allocated on the GPU (like when we want to run two LLM instances in tests), this will count all that memory as if it was allocated in the forward pass. I think we should instead just subtract two values here: #18974
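
A minimal sketch of the two accounting strategies being contrasted here (hypothetical variable names; the actual logic lives in vLLM's memory profiling code):

import torch

free_before, total = torch.cuda.mem_get_info()

# ... profile / forward pass runs here ...

free_after, _ = torch.cuda.mem_get_info()

# Pessimistic: everything that is not free on the device is charged to this
# run, including memory held by other processes or a second LLM instance.
pessimistic_bytes = total - free_after

# Suggested alternative (subtract two snapshots): only memory that actually
# disappeared during the run is counted.
delta_bytes = free_before - free_after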


laithsakka commented Jul 15, 2025

This PR overrides the compilation level that the user provides; is that expected? Should we at least warn? For example, if I do something like

conf = vllm.config.CompilationConfig(level=CompilationLevel.NO_COMPILATION)
llm = LLM(model="meta-llama/Llama-3.2-1B", compilation_config=conf)

then the level will be overridden by this code when VLLM_USE_V1 is on and goes back to PIECEWISE:

if envs.VLLM_USE_V1 and self.model_config is not None and \
        not self.model_config.enforce_eager:
    # By default, V1 uses piecewise CUDA graphs. If full_cuda_graph
    # is set to True, full CUDA graphs will be used.
    self.compilation_config.cudagraph_num_of_warmups = 1
    self.compilation_config.level = CompilationLevel.PIECEWISE
    self.compilation_config.set_splitting_ops_for_v1()
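
As a hypothetical sketch of the warning being asked for (not the actual vLLM code; the logger call and the not-already-PIECEWISE check are assumptions), the override above could announce itself before replacing the user-provided level:

if envs.VLLM_USE_V1 and self.model_config is not None and \
        not self.model_config.enforce_eager:
    if self.compilation_config.level not in (None, CompilationLevel.PIECEWISE):
        # Assumed: tell the user their explicit compilation level is ignored.
        logger.warning(
            "Overriding user-provided compilation level %s with PIECEWISE "
            "because VLLM_USE_V1 is set and enforce_eager is off.",
            self.compilation_config.level)
    self.compilation_config.cudagraph_num_of_warmups = 1
    self.compilation_config.level = CompilationLevel.PIECEWISE
    self.compilation_config.set_splitting_ops_for_v1()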

ProExpertProg (Collaborator) commented Jul 15, 2025

Yes, I believe #19340 is supposed to address this.

Labels: ready, v1
5 participants