
Disable remote caching when calling compile_fx #16611


Merged
1 commit merged into vllm-project:main on Apr 16, 2025

Conversation

@zou3519 (Collaborator) commented on Apr 14, 2025

The problem is as follows:

  • vLLM requires its monkeypatched functions to run, e.g.
    def hijack_compiled_fx_graph_hash(*args, **kwargs)
    (see the sketch after this list).
  • These functions may not run if (1) a user has a torch.compile remote cache set up and (2) there is a remote cache hit.
  • When the monkeypatched/hijacked functions fail to run, we trip these assertions:

        assert hash_str is not None, (
            "failed to get the hash of the compiled graph")
        assert file_path is not None, (
            "failed to get the file path of the compiled graph")

This PR disables torch.compile remote caching for vLLM compile.
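
One way to picture the fix (a sketch under assumptions, not the exact diff): force the remote FX-graph cache off for the compile_fx call via an Inductor config patch. fx_graph_remote_cache is an existing Inductor config flag, but passing it through config_patches here, and the graph/example_inputs placeholders, are illustrative:

    from torch._inductor.compile_fx import compile_fx

    # Disable the remote FX-graph cache for this compilation so a
    # remote hit can never bypass the hijacked hooks; the local
    # on-disk cache is left untouched.
    config_patches = {"fx_graph_remote_cache": False}
    compiled_graph = compile_fx(graph, example_inputs,
                                config_patches=config_patches)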

Test Plan:

• tested locally with vllm serve "meta-llama/Llama-4-Scout-17B-16E-Instruct" -tp 8 --max_model_len 1000 --override-generation-config='{"attn_temperature_tuning": true}'


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, a small, essential subset of tests that quickly catches errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@houseroad (Collaborator) left a comment

Thanks for the fix, let me play around locally.

The problem is as follows:
- vLLM requires its monkeypatched functions to run (e.g.
  https://github.com/vllm-project/vllm/blob/7b5ecf79bd94aab0d782c70126d0dcc37c16bc60/vllm/compilation/compiler_interface.py#L251)
- These functions may not run if (1) a user has a torch.compile remote
  cache set up and (2) there is a remote cache hit.
- When the monkeypatched/hijacked functions fail to run, we get some
  assertions:
  https://github.com/vllm-project/vllm/blob/7b5ecf79bd94aab0d782c70126d0dcc37c16bc60/vllm/compilation/compiler_interface.py#L299-L302

This PR disables torch.compile remote caching for vLLM compile.

Test Plan:
- tested locally

Signed-off-by: rzou <zou3519@gmail.com>
@zou3519 (Collaborator, Author) commented on Apr 16, 2025

This PR is mostly to fix a Meta-internal bug I noticed (where we do have the torch.compile remote caches on), but I think it is generally applicable to vLLM, so we should ship it.

@zou3519 marked this pull request as ready for review on April 16, 2025 02:16
@houseroad (Collaborator) left a comment

Looks good.

@houseroad added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Apr 16, 2025
@houseroad merged commit 966c742 into vllm-project:main on Apr 16, 2025
58 checks passed
lionelvillard pushed a commit to lionelvillard/vllm that referenced this pull request Apr 17, 2025
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Yang Wang <elainewy@meta.com>
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Signed-off-by: rzou <zou3519@gmail.com>
Signed-off-by: Mu Huai <tianbowen.tbw@antgroup.com>