[WIP][Feature] Support chunked prefill when using Deepseek MTP model as draft model #15153

Draft
wants to merge 6 commits into base: main

Conversation

@pyc96 (Contributor) commented Mar 19, 2025

Deepseek MTP is currently not compatible with chunked prefill. When both features are enabled, a shape-mismatch runtime crash occurs once enough concurrent requests arrive to trigger chunked prefill.

python -m vllm.entrypoints.openai.api_server --gpu-memory-utilization 0.8  --max-model-len 65536 --max-num-seqs 128 --seed 0 --tensor-parallel-size 8 --model deepseek-ai/DeepSeek-R1 --trust-remote-code --num-speculative-tokens 3  --enable-chunked-prefill

*Sending concurrent requests...*

All worker processes (pids 655088–655094) emit the same traceback, interleaved in the raw log. Deduplicated:

ERROR 03-19 19:13:19 [multiproc_worker_utils.py:242]
  File "/workspace/vllm-oss/vllm/spec_decode/spec_decode_worker.py", line 551, in start_worker_execution_loop
    while self._run_non_driver_rank():
  File "/workspace/vllm-oss/vllm/spec_decode/spec_decode_worker.py", line 755, in _run_non_driver_rank
    self.proposer_worker.execute_model()
  File "/workspace/vllm-oss/vllm/worker/worker_base.py", line 171, in execute_model
    return self.worker.execute_model(execute_model_req)
  File "/workspace/vllm-oss/vllm/worker/worker_base.py", line 420, in execute_model
    output = self.model_runner.execute_model(
  File "/workspace/vllm-oss/vllm/worker/model_runner.py", line 1742, in execute_model
    hidden_or_intermediate_states = model_executable(
  File "/workspace/vllm-env/lib/python3.12/site-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)

INFO:     127.0.0.1:41546 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR 03-19 19:13:19 [engine.py:141] RuntimeError('Tensors must have same number of dimensions: got 2 and 3')
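For context, the final error is PyTorch's generic rank check for operations such as torch.cat. A minimal standalone sketch that reproduces the same message (the tensor names and shapes here are illustrative, not the actual vLLM tensors):

```python
import torch

# Illustrative shapes only: a flattened [num_tokens, hidden_size] tensor vs.
# a batched [batch, seq_len, hidden_size] tensor, mimicking the 2-D/3-D
# mismatch reported above.
flat_hidden = torch.zeros(8, 4096)
batched_hidden = torch.zeros(2, 4, 4096)

try:
    torch.cat([flat_hidden, batched_hidden], dim=0)
except RuntimeError as e:
    print(e)  # -> Tensors must have same number of dimensions: got 2 and 3
```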


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either: Add ready label to the PR or enable auto-merge.

🚀

@pyc96 pyc96 changed the title Support chunked prefill when using Deepseek MTP model as draft model [Feature] Support chunked prefill when using Deepseek MTP model as draft model Mar 19, 2025
Signed-off-by: pyc96 <pychen96@gmail.com>
@pyc96 pyc96 marked this pull request as ready for review March 20, 2025 22:04
@pyc96 pyc96 marked this pull request as draft March 20, 2025 22:52
@pyc96 pyc96 changed the title [Feature] Support chunked prefill when using Deepseek MTP model as draft model [WIP][Feature] Support chunked prefill when using Deepseek MTP model as draft model Mar 21, 2025
luyuzhe111 and others added 4 commits March 21, 2025 18:22
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: Bryan Lu <yuzhelu@amazon.com>
Signed-off-by: pyc96 <pychen96@gmail.com>
@pyc96 (Contributor, Author) commented Mar 21, 2025

Just found out #14922 completed most of the work and can be reused directly for MTP. Simply allowing both drafter model types, 'eagle' and 'deepseek_mtp', to use and update prefill_hidden_states fixes the issue.
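A minimal sketch of that change, assuming the spec-decode path currently gates prefill_hidden_states handling on the drafter model type being 'eagle' (the constant and helper below are hypothetical, for illustration only, not actual vLLM identifiers):

```python
# Hypothetical: widen the existing 'eagle'-only check so the Deepseek MTP
# drafter also reads/updates the hidden states captured during chunked
# prefill.
DRAFTERS_USING_PREFILL_HIDDEN_STATES = {"eagle", "deepseek_mtp"}

def uses_prefill_hidden_states(draft_model_type: str) -> bool:
    """Return True if the draft model consumes the target model's
    prefill hidden states, which chunked prefill must propagate."""
    return draft_model_type in DRAFTERS_USING_PREFILL_HIDDEN_STATES
```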

@ayrnb commented Apr 21, 2025

Is this supported yet?
