[VLM] Avoid unnecessary dummy multimodal data during processing #16416


Merged · 2 commits into vllm-project:main from split-dummy-data · Apr 10, 2025

Conversation

DarkLight1337 (Member) commented Apr 10, 2025

Currently, with the cache enabled, BaseMultiModalProcessor._apply_hf_processor_mm_only creates both text and multimodal dummy data, but the latter is unused, leading to unnecessary overhead from allocating NumPy arrays. This is especially costly for models that support large image sizes, such as the Qwen2-VL series.
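
For a sense of scale: dummy images are generated at the model's maximum supported resolution, so each discarded array can be tens of megabytes. The snippet below is purely illustrative; the resolution and image count are assumptions, not the exact values vLLM uses:

```python
import numpy as np

# Hypothetical worst-case dummy images for a model that accepts large inputs.
# One RGB image at 3584x3584 is ~37 MiB; allocating several per processed
# request only to discard them wastes both time and memory bandwidth.
num_images = 4
dummy_images = [
    np.zeros((3584, 3584, 3), dtype=np.uint8) for _ in range(num_images)
]

total_mib = sum(img.nbytes for img in dummy_images) / 2**20
print(f"{total_mib:.0f} MiB of dummy data allocated and never used")
```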

This PR removes that overhead by splitting dummy data generation into separate text and multimodal parts, so that only the dummy text is created in BaseMultiModalProcessor._apply_hf_processor_mm_only.

Note: multi-modal processors will be required to implement get_dummy_text and get_dummy_mm_data in a future release. This PR adds a warning asking developers to do so if their processor hasn't yet.
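
For processor implementers, the split looks roughly like this. The sketch below is a minimal illustration rather than the exact vLLM implementation: the module path, the _get_dummy_images helper, the "<image>" placeholder, and the 1024x1024 size are assumptions to be checked against the repository.

```python
from collections.abc import Mapping

from vllm.multimodal.profiling import BaseDummyInputsBuilder


class MyDummyInputsBuilder(BaseDummyInputsBuilder):
    """Sketch of the split dummy-data API described in this PR."""

    def get_dummy_text(self, mm_counts: Mapping[str, int]) -> str:
        # Cheap path: only placeholder text is built, which is all that
        # _apply_hf_processor_mm_only needs -- no arrays are allocated.
        num_images = mm_counts.get("image", 0)
        return "<image>" * num_images  # placeholder token is model-specific

    def get_dummy_mm_data(self, seq_len: int, mm_counts: Mapping[str, int]):
        # Expensive path: actual dummy images are allocated here, and this
        # is now invoked only when multimodal dummy data is really needed.
        num_images = mm_counts.get("image", 0)
        return {
            "image": self._get_dummy_images(
                width=1024,   # assumed size; real builders query model limits
                height=1024,
                num_images=num_images,
            )
        }
```

The key design point is that the cheap text-only path and the expensive array-allocating path are now separate methods, so the cached processing path can invoke only the former.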

Benchmark (#11196):

```
python benchmarks/mmmu_bench.py --model Qwen/Qwen2.5-VL-3B-Instruct --max-model-len 16384 --max-num-seqs 64 --num-prompts 250
```

Main branch

```
# Trial 1
Request throughput: 0.36 req/s
Total generated tokens: 1149
Token generation rate: 1.66 tok/s

# Trial 2
Request throughput: 0.51 req/s
Total generated tokens: 1151
Token generation rate: 2.33 tok/s

# Trial 3
Request throughput: 0.63 req/s
Total generated tokens: 1180
Token generation rate: 3.00 tok/s
```

This branch

```
# Trial 1
Request throughput: 0.54 req/s
Total generated tokens: 1173
Token generation rate: 2.51 tok/s

# Trial 2
Request throughput: 0.54 req/s
Total generated tokens: 1138
Token generation rate: 2.44 tok/s

# Trial 3
Request throughput: 0.57 req/s
Total generated tokens: 1191
Token generation rate: 2.74 tok/s
```

DarkLight1337 added the ready (ONLY add when PR is ready to merge/full CI is needed) label Apr 10, 2025
DarkLight1337 requested a review from Isotr0py April 10, 2025 16:34
DarkLight1337 requested a review from ywang96 as a code owner April 10, 2025 16:35

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small and essential subset of tests to quickly catch errors. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

DarkLight1337 changed the title from "[VLM] Avoid unnecessary dummy multimodal data during processing." to "[VLM] Avoid unnecessary dummy multimodal data during processing" Apr 10, 2025
mergify bot added the multi-modality (Related to multi-modality (#4194)) label Apr 10, 2025
DarkLight1337 moved this to In Progress in Multi-modality Core Apr 10, 2025
Isotr0py (Collaborator) left a comment

Nice, LGTM!

DarkLight1337 enabled auto-merge (squash) April 10, 2025 16:54
DarkLight1337 merged commit 56d4aef into vllm-project:main Apr 10, 2025
65 checks passed
github-project-automation bot moved this from In Progress to Done in Multi-modality Core Apr 10, 2025
p88h pushed a commit to p88h/vllm that referenced this pull request Apr 10, 2025
DarkLight1337 deleted the split-dummy-data branch April 11, 2025 03:36
yangw-dev pushed a commit to yangw-dev/vllm that referenced this pull request Apr 21, 2025
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request Apr 29, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request May 12, 2025
Labels
multi-modality (Related to multi-modality (#4194)) · ready (ONLY add when PR is ready to merge/full CI is needed)
Projects
Status: Done
2 participants