
Conversation

Contributor

@cyang49 cyang49 commented Jul 23, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing a test command.
  • The test results, such as pasting a before/after results comparison, or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and examples for a new model.

Purpose

  1. Refactor the mamba2 kernels and their callers to simplify the code by assuming input tensors are always varlen, with a shape like (seqlen, hidden_dim) plus cu_seqlens metadata to determine the number of tokens of each request in the batch, instead of (batch, seqlen, hidden_dim) where some of the requests need to be padded.
  2. Refactor the conv1d metadata computation: it is now done at the same time as preparing the other metadata, instead of in the first layer's forward pass. This also cleans up the code a bit.
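As an illustration of the varlen layout in item 1, here is a minimal sketch with toy values (NumPy here for brevity; the actual kernels operate on torch tensors, and all variable names are hypothetical):

```python
import numpy as np

# Three requests with 3, 1, and 4 tokens; hidden dim 8 (toy values).
seq_lens = np.array([3, 1, 4])
hidden_dim = 8

# Varlen layout: all tokens concatenated along one axis, no padding.
total_tokens = int(seq_lens.sum())
x = np.random.randn(total_tokens, hidden_dim)  # (seqlen, hidden_dim)

# cu_seqlens holds the cumulative request boundaries: [0, 3, 4, 8].
cu_seqlens = np.concatenate(([0], np.cumsum(seq_lens)))

# Tokens of request i are a contiguous slice: no padding bookkeeping.
i = 2
tokens_i = x[cu_seqlens[i]:cu_seqlens[i + 1]]  # shape (4, 8)
```

In the padded (batch, seqlen, hidden_dim) layout, the same batch would occupy 3 × 4 × 8 elements, with the shorter requests padded out to the longest one.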

Test Plan

  1. The e2e test with the lm_eval gsm8k task should show comparable results (in both v0 and v1).
  2. Pass the unit tests under tests/kernels/mamba in CI/CD.
  3. Run lm_eval on other affected models, e.g. plamo2.

Test Result

Commands

V1

```
VLLM_ATTENTION_BACKEND=FLASHINFER VLLM_USE_V1=1 lm_eval --model vllm \
  --model_args pretrained=ibm-granite/granite-4.0-tiny-preview,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,enable_prefix_caching=False \
  --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k
```

V0

```
lm_eval --model vllm \
  --model_args pretrained=ibm-granite/granite-4.0-tiny-preview \
  --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k
```

Main (316b1bf)

V1

vllm (pretrained=ibm-granite/granite-4.0-tiny-preview,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,enable_prefix_caching=False,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6058|±  |0.0135|
|     |       |strict-match    |     5|exact_match|↑  |0.5838|±  |0.0136|

V0

vllm (pretrained=ibm-granite/granite-4.0-tiny-preview,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6042|±  |0.0135|
|     |       |strict-match    |     5|exact_match|↑  |0.5845|±  |0.0136|

This PR

V1

vllm (pretrained=ibm-granite/granite-4.0-tiny-preview,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,enable_prefix_caching=False,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6058|±  |0.0135|
|     |       |strict-match    |     5|exact_match|↑  |0.5838|±  |0.0136|

V0

vllm (pretrained=ibm-granite/granite-4.0-tiny-preview,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.6005|±  |0.0135|
|     |       |strict-match    |     5|exact_match|↑  |0.5785|±  |0.0136|

Plamo2 lm_eval

```
lm_eval --model vllm \
  --model_args pretrained=pfnet/plamo-2.1-2b-cpt,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,max_model_len=4096 \
  --batch_size auto --trust_remote_code --cache_requests true --tasks gsm8k
```
vllm (pretrained=pfnet/plamo-2.1-2b-cpt,tensor_parallel_size=1,dtype=auto,gpu_memory_utilization=0.95,max_model_len=4096,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: None, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.5906|±  |0.0135|
|     |       |strict-match    |     5|exact_match|↑  |0.5861|±  |0.0136|

(Optional) Documentation Update

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request refactors the Mamba2 kernels and their callers to natively support variable-length inputs, removing the explicit batch dimension from tensors and relying on cu_seqlens for sequence boundaries. This is a significant and beneficial simplification that should improve performance by avoiding padding. The changes are systematic and consistent across all modified files. I've found one potential compatibility issue with older Triton versions that should be addressed. Otherwise, the refactoring looks solid.

@mergify mergify bot added the v1 label Jul 23, 2025
@cyang49 cyang49 changed the title [Model] mamba2 varlen refactor [Model] Mamba2 varlen and metadata refactor Jul 24, 2025
@cyang49 cyang49 marked this pull request as ready for review July 24, 2025 19:12
@cyang49 cyang49 force-pushed the varlen_refactor branch 2 times, most recently from 7392a5a to 9af8a86 Compare July 28, 2025 13:51
@mergify

mergify bot commented Jul 30, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @cyang49.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

Member

@tlrmchlsmth tlrmchlsmth left a comment


Could you please make sure this won't break other mamba models? It looks like it would break plamo2.py and possibly others.

@cyang49
Contributor Author

cyang49 commented Jul 30, 2025

Could you please make sure this won't break other mamba models? It looks like it would break plamo2.py and possibly others.

Is there a test I can run? Or is it just a matter of grepping for the keywords in the repo?

@mergify

mergify bot commented Aug 2, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @cyang49.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@tdoublep
Member

@cyang49 I will review these changes carefully early next week. After going through these kernels a lot over the last few days, I now understand how useful this refactor is. Right now, the kernels support many cases that we never use in vLLM, which makes them very difficult to read and maintain.

Member

@tdoublep tdoublep left a comment


This is a significant improvement to the maintainability of the code. I have some small suggestions but would like to merge this one ASAP before other things. Please let me know if you need any help resolving the conflicts.

@cyang49
Contributor Author

cyang49 commented Sep 17, 2025

@tomeras91 could you have a look at the interface change to mamba_chunk_scan_combined_varlen and the changes I made to test_mamba_ssm_ssd.py?

@mergify

mergify bot commented Sep 19, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @cyang49.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Sep 19, 2025
@mergify mergify bot removed the needs-rebase label Sep 19, 2025
@cyang49 cyang49 force-pushed the varlen_refactor branch 3 times, most recently from b4f198c to 56bf99d Compare September 24, 2025 16:03
cyang49 and others added 3 commits September 24, 2025 13:17
- mamba2 varlen refactor
- refactoring and reduce redundant query_start_loc_p computation in v1
- conv1d metadata refactoring
- use int64 strides except for the least significant dim
- fix query_start_loc_p affected by metadata refactor

Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
merging

Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Member

@tdoublep tdoublep left a comment


LGTM - thank you for the effort to improve the code and your patience with the multiple rebases.

@tdoublep tdoublep enabled auto-merge (squash) September 26, 2025 08:11
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 26, 2025
@tdoublep tdoublep merged commit 2b6b1d7 into vllm-project:main Sep 26, 2025
62 checks passed
@cyang49 cyang49 deleted the varlen_refactor branch September 30, 2025 14:09
pdasigi pushed a commit to pdasigi/vllm that referenced this pull request Oct 2, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
yewentao256 pushed a commit that referenced this pull request Oct 3, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
Signed-off-by: yewentao256 <zhyanwentao@126.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 10, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>
choprahetarth pushed a commit to Tandemn-Labs/vllm that referenced this pull request Oct 11, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
lywa1998 pushed a commit to lywa1998/vllm that referenced this pull request Oct 20, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
alhridoy pushed a commit to alhridoy/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
xuebwang-amd pushed a commit to xuebwang-amd/vllm that referenced this pull request Oct 24, 2025
Signed-off-by: Chih-Chieh-Yang <7364402+cyang49@users.noreply.github.com>
Co-authored-by: RishiAstra <40644327+RishiAstra@users.noreply.github.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>