
Conversation

@linzebing (Contributor) commented Aug 14, 2025

Purpose

Streamline slot mapping computation by replacing Torch tensor flattening and conversion with direct NumPy indexing via ravel, eliminating redundant copies and conversions
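In essence, the change looks like the following (a sketch assembled from the snippets quoted later in this thread; variable names follow compute_slot_mapping):

    block_table_indices = (req_indices * self.max_num_blocks_per_req +
                           positions // self.block_size)

    # Before: flatten the CPU tensor, index it, then convert back to NumPy.
    block_numbers = self.get_cpu_tensor().flatten()[block_table_indices].numpy()

    # After: index the NumPy view of the block table directly.
    block_numbers = self.block_table_np.ravel()[block_table_indices]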

Test Plan

# vLLM Serving
VLLM_USE_V1=1 vllm serve facebook/opt-125m \
    --swap-space 16 \
    --disable-log-requests \
    --host :: \
    --dtype float16

# Capture traces
VLLM_USE_V1=1 vllm bench serve \
    --dataset-name random \
    --model facebook/opt-125m \
    --served-model-name facebook/opt-125m \
    --random-input-len 700 \
    --random-output-len 1 \
    --endpoint /v1/completions \
    --ignore-eos \
    --host localhost \
    --port 8000 \
    --num-prompts 100 \
    --profile

Also ran a throughput benchmark:

CUDA_VISIBLE_DEVICES=7 VLLM_USE_V1=1 python3 benchmarks/benchmark_throughput.py \
    --model facebook/opt-125m \
    --backend vllm \
    --input-len 800 \
    --output-len 75 \
    --num-prompts 30000

Test Result

Reduced compute_slot_mapping from 400+ μs to 15-30 μs.
Throughput improved by 2.07% for opt-125m with input=800 and output=75.

Before:
Throughput: 268.45 requests/s, 234888.85 total tokens/s, 20133.98 output tokens/s


After:
Throughput: 274.01 requests/s, 239750.19 total tokens/s, 20550.72 output tokens/s

(Optional) Documentation Update


Essential Elements of an Effective PR Description Checklist
  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing test command.
  • The test results, such as pasting the results comparison before and after, or e2e results
  • (Optional) The necessary documentation update, such as updating supported_models.md and examples for a new model.

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs would not trigger full CI run by default. Instead, it would only run fastcheck CI which starts running only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on Buildkite UI (linked in the PR checks section) and unblock them. If you do not have permission to unblock, ping simon-mo or khluu to add you in our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the v1 label Aug 14, 2025
@gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request introduces a performance optimization in compute_slot_mapping by replacing PyTorch tensor operations with direct NumPy indexing. The change from self.get_cpu_tensor().flatten()[...].numpy() to self.block_table_np.ravel()[...] is more direct, avoids unnecessary function calls and tensor conversions, and leverages NumPy's efficiency for this operation. The provided benchmarks confirm the significant performance improvement. The change is correct and well-justified.
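For context, a CPU tensor and its .numpy() view share the same underlying buffer, so indexing the NumPy array reads exactly the data the old torch path did, just without the extra conversion. A minimal sketch, assuming block_table_np is constructed from the CPU block table tensor via .numpy():

    import torch

    # Sketch (assumed construction): the NumPy array is a zero-copy view of the
    # CPU block table tensor, so indexing it reads the same data.
    block_table_cpu = torch.zeros((4, 8), dtype=torch.int32)
    block_table_np = block_table_cpu.numpy()

    block_table_cpu[2, 3] = 42
    assert block_table_np.ravel()[2 * 8 + 3] == 42  # writes through the tensor are visible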

@linzebing (Contributor, Author) commented Aug 14, 2025

One-liner with a significant perf improvement. @heheda12345, @LucasWilkinson, @houseroad, and @njhill, can you take a look?

@linzebing linzebing changed the title [Core] direct indexing on self.block_table_np [Core] direct indexing on self.block_table_np for compute_slot_mapping Aug 14, 2025
@linzebing linzebing changed the title [Core] direct indexing on self.block_table_np for compute_slot_mapping [Core] direct indexing on self.block_table_np in compute_slot_mapping Aug 14, 2025
@njhill (Member) left a comment

Wow, nice!

@njhill njhill added the ready ONLY add when PR is ready to merge/full CI is needed label Aug 14, 2025
Signed-off-by: linzebing <linzebing1995@gmail.com>
@Jialin (Collaborator) commented Aug 15, 2025

Nice! TIL

block_table_indices = (req_indices * self.max_num_blocks_per_req +
                       positions // self.block_size)
block_table_cpu = self.get_cpu_tensor()
block_numbers = block_table_cpu.flatten()[block_table_indices].numpy()
Collaborator:

IIUC, the win is mainly coming from avoiding the tensor-to-NumPy copy, while tensor.flatten and np.ravel should have similar performance?

Contributor Author:

Yes, tensor.flatten and np.ravel are similar. The main overhead looks like torch tensor indexing being slow; the NumPy conversion is relatively cheap.
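A minimal sketch, with hypothetical shapes, of how the two gather paths can be compared locally:

    import time
    import numpy as np
    import torch

    # Minimal sketch, hypothetical sizes: compare the old torch gather path with
    # the new NumPy path used in compute_slot_mapping.
    block_table_cpu = torch.randint(0, 1 << 20, (1024, 128), dtype=torch.int32)
    block_table_np = block_table_cpu.numpy()
    indices = np.random.randint(0, 1024 * 128, size=8192)

    def old_path():
        return block_table_cpu.flatten()[indices].numpy()

    def new_path():
        return block_table_np.ravel()[indices]

    for name, fn in (("torch", old_path), ("numpy", new_path)):
        start = time.perf_counter()
        for _ in range(1000):
            fn()
        print(name, f"{(time.perf_counter() - start) / 1000 * 1e6:.1f} us/iter")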

Collaborator:

torch tensor's indexing is slow

Interesting. Ideally, the indexing cost should be similar as well. I guess the difference might come from:

  1. An actual indexing performance gap.
  2. A layout difference, where torch.Tensor made a copy while NumPy still returned a view.
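One way to check the second possibility (a minimal sketch, assuming a contiguous block table):

    import numpy as np
    import torch

    # Minimal sketch, assuming a contiguous block table: check whether either
    # path copies before the gather, and whether the gather itself allocates.
    block_table_cpu = torch.zeros((1024, 128), dtype=torch.int32)
    block_table_np = block_table_cpu.numpy()
    idx = np.random.randint(0, 1024 * 128, size=4096)

    flat_t = block_table_cpu.flatten()
    flat_n = block_table_np.ravel()
    print(flat_t.data_ptr() == block_table_cpu.data_ptr())  # True -> flatten returned a view
    print(np.shares_memory(flat_n, block_table_np))         # True -> ravel returned a view

    out_t = flat_t[torch.from_numpy(idx)]
    out_n = flat_n[idx]
    print(out_t.data_ptr() != flat_t.data_ptr(), out_n.base is None)  # both gathers allocate new buffers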

# block_size.
block_table_indices = (req_indices * self.max_num_blocks_per_req +
                       positions // self.block_size)
block_table_cpu = self.get_cpu_tensor()
Collaborator:

Maybe we could check the other get_cpu_tensor references to see if there are any other opportunities :)

Contributor Author:

The rest of the call sites are in tpu_model_runner.py; I don't spot anything obvious yet.

@njhill njhill merged commit 6e67077 into vllm-project:main Aug 15, 2025
39 checks passed
@facebook-github-bot

@linzebing has imported this pull request. If you are a Meta employee, you can view this in D80367371.

666even666 pushed a commit to 666even666/vllm that referenced this pull request Aug 18, 2025
yiliu30 pushed a commit to yiliu30/vllm-fork that referenced this pull request Aug 19, 2025
divakar-amd pushed a commit to divakar-amd/vllm_upstream that referenced this pull request Aug 20, 2025
djmmoss pushed a commit to djmmoss/vllm that referenced this pull request Aug 21, 2025
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
xiao-llm pushed a commit to xiao-llm/vllm that referenced this pull request Aug 28, 2025
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025

Labels

ready ONLY add when PR is ready to merge/full CI is needed v1
