[Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption #4451
Conversation
    proposal_lens=proposal_lens)

-    exception_secret = 'artifical stop'
+    exception_secret = 'artificial stop'
Unrelated, but I found this anyway, so I'm just fixing it here.
typo killer! 🎯
    # The following field is test-only. It is used to inject artificial
    # preemption.
    self.enable_artificial_preemption = ENABLE_ARTIFICIAL_PREEMPT
Could be premature optimization, but accessing a global variable is usually more expensive than accessing a class attribute, so I just moved it here.
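For illustration, a minimal sketch of the pattern being described (the Scheduler skeleton is simplified, not the actual vLLM class; only the env var name is taken from the surrounding diff):

```python
import os

# Module-level flag, read once from the environment at import time.
ENABLE_ARTIFICIAL_PREEMPT = bool(
    os.getenv("VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT", False))


class Scheduler:  # simplified stand-in for the real scheduler
    def __init__(self) -> None:
        # Copy the module-level flag onto the instance once, so the hot
        # scheduling path does a cheap attribute lookup instead of
        # resolving a module global on every call.
        self.enable_artificial_preemption = ENABLE_ARTIFICIAL_PREEMPT
```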
    assert ENABLE_ARTIFICIAL_PREEMPT is True, (
        "Use an env var VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT=1. "
        "`VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT=1 pytest "
        "tests/basic_correctness/test_preemption.py`")
I created num_gpu_blocks_override for spec decode preemption tests and it works pretty well!
vllm/tests/spec_decode/e2e/test_correctness.py, lines 351 to 395 in df29793:
@pytest.mark.parametrize(
    "common_llm_kwargs",
    [{
        "block_size": 8,
        # 2 for small prompt, 256//8 for generated.
        "num_gpu_blocks_override": 2 + 256 // 8,
        "max_model_len": (2 + 256 // 8) * 8,
        # Skip cuda graph recording for fast test.
        "enforce_eager": True,
        # Required for spec decode.
        "use_v2_block_manager": True
    }])
@pytest.mark.parametrize("per_test_common_llm_kwargs", [
    {
        "model": "JackFram/llama-160m",
    },
])
@pytest.mark.parametrize("baseline_llm_kwargs", [{}])
@pytest.mark.parametrize("test_llm_kwargs", [
    {
        "speculative_model": "JackFram/llama-68m",
        "num_speculative_tokens": 5,
    },
])
@pytest.mark.parametrize(
    "output_len",
    [
        # Use small output len for fast test.
        256,
    ])
@pytest.mark.parametrize("batch_size", [4])
@pytest.mark.parametrize("seed", [1])
def test_spec_decode_e2e_greedy_correctness_with_preemption(
        baseline_llm_generator, test_llm_generator, batch_size: int,
        output_len: int):
    """Verify greedy equality, even when some sequences are preempted mid-
    generation.
    """
    run_greedy_equality_correctness_test(baseline_llm_generator,
                                         test_llm_generator,
                                         batch_size,
                                         max_output_len=output_len,
                                         force_output_len=True)
How is 2 + 256 // 8 decided? One concern is that it could be a little prone to breaking on code changes (e.g., the example prompt gets longer), so ideally we should calculate this programmatically.
Alternatively, is there a way to verify from the engine that preemption happened? If so, we can just add an assert so that when things change, it will at least be caught.
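A rough sketch of what computing the limit programmatically could look like; the helper and the prompt length here are hypothetical, not existing vLLM code:

```python
def blocks_needed(prompt_len: int, output_len: int, block_size: int) -> int:
    """Number of KV-cache blocks one sequence needs for prompt + output."""
    total_tokens = prompt_len + output_len
    return -(-total_tokens // block_size)  # ceiling division


block_size = 8
output_len = 256
prompt_len = 16  # would come from tokenizing the actual example prompt

# Give the engine just enough blocks for a single sequence, so any
# batch size > 1 is forced into preemption as the sequences grow.
num_gpu_blocks_override = blocks_needed(prompt_len, output_len, block_size)
max_model_len = num_gpu_blocks_override * block_size
```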
Okay, I followed the suggestion and added a test-only var to the scheduler to verify that preemption happens.
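Roughly, the kind of test-only hook this refers to (the attribute and helper names are hypothetical; see the actual diff for what was added):

```python
class Scheduler:  # simplified stand-in for the real scheduler
    def __init__(self) -> None:
        # Test-only counter: incremented on every preemption so tests can
        # assert that preemption actually occurred during the run.
        self.num_cumulative_preemption = 0

    def _preempt(self, seq_group) -> None:
        self.num_cumulative_preemption += 1
        # ... real preemption logic (recompute or swap out) would go here ...


def assert_preemption_happened(scheduler: Scheduler) -> None:
    # Called from the test after generation finishes.
    assert scheduler.num_cumulative_preemption > 0, (
        "Test expected at least one preemption, but none happened.")
```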
The approach you went with is good. Technically you don't need to check that preemption happened, because we construct the number of blocks such that preemption must occur. E.g., if each sequence generates 256 tokens (ignore_eos=True) and the block size is 8, only bs=1 can complete with 32 blocks. If we set bs=2+, then as the sequences grow, a preemption is guaranteed.
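Spelling out that arithmetic from the test parameters quoted above (just an illustrative check, not part of the test):

```python
block_size = 8
prompt_blocks = 2                       # small prompt fits in 2 blocks
gen_blocks = 256 // block_size          # 32 blocks for 256 generated tokens
available = prompt_blocks + gen_blocks  # 34 blocks given to the engine

# One sequence eventually needs about `available` blocks to finish, so with
# batch_size >= 2 the shared pool runs out mid-generation and the scheduler
# must preempt.
per_seq_need = prompt_blocks + gen_blocks
assert 2 * per_seq_need > available
```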
You may want to add a comment explaining this, as I did for the block manager preemption test. I should have added it for spec decode as well.
vllm/tests/core/block/e2e/test_correctness.py, lines 30 to 42 in 4bb53e2:
| """Verify block manager v2 produces same outputs as block manager v1, even | |
| when there is preemption. | |
| This constructs two LLM, each with limited number of GPU blocks. The limit | |
| is decided such that as the sequences in the batch grow, sequences must be | |
| preempted and removed from cache. | |
| If the output token ids are equivalent, then we have confidence that the KV | |
| cache is not corrupted in the v2 block manager. | |
| NOTE: We want a significant number of generated tokens so that any incorrect | |
| KV mapping has time to build up error. | |
| """ |
    # If chunked prefill is preempted, it is possible chunked prefill
    # comes before decoding. In that case, it is possible there's no
    # budget for decoding which returns num_running_tokens == 0.
    if num_running_tokens == 0:
We could technically avoid this if we always sorted the waiting queue by FCFS, but we are not doing that now for performance reasons.
It can happen in this scenario:
step 1
  waiting: [1, 2, 3]
  running: []
step 2
  waiting: []
  running: [1, 2, 3]
step 3 (all reqs preempted)
  waiting: [3, 2, 1]  # NOTE: we don't sort waiting for perf; preempted reqs are appended to the left.
  running: []
step 4
  waiting: [1]
  running: [2, 3]  # NOTE: running is always sorted by FCFS.
step 5 (1 is scheduled)
  waiting: []
  running: [1, 2, 3]  ===> 1 is a chunked prefill because it was just scheduled from waiting, and it can consume the entire token budget.
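A small illustration of the queue behavior described above, assuming the waiting queue is a collections.deque and preempted requests are pushed back with appendleft (a simplification of the real scheduler code):

```python
from collections import deque

waiting = deque()       # FCFS order for newly arrived requests
running = [1, 2, 3]     # all three requests were scheduled (steps 1-2)

# Step 3: everything is preempted; each preempted request is pushed to the
# *front* of the waiting queue, so the resulting order is reversed.
for req in running:
    waiting.appendleft(req)
running = []

print(list(waiting))    # [3, 2, 1] -- not FCFS order

# Steps 4-5: requests 2 and 3 get rescheduled first; request 1 comes back
# later as a chunked prefill and its prefill can consume the whole token
# budget, leaving num_running_tokens == 0 for the decodes behind it.
```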
@cadedaniel For some reason the swapping test doesn't trigger preemption. I will fix it and ping you (and add comments for the GPU block size as you suggested).

@cadedaniel The new approach was a little difficult to make work with the swap test, so I just reverted to the original approach. I'll let you know when it is ready to be merged.
cadedaniel
left a comment
What's up with the swap test? Happy to meet to get to the bottom of it.
    # Make sure all queues are updated.
    assert len(running_queue) == 0
Is this assert important? Sorry, I missed this earlier.
This means len(running_queue) != num_seqs sometimes. I think it is harmless (and it only happens upon preemption + chunked prefill); the running queue is sorted by FCFS anyway. There is a way to keep this assert, but it would affect performance a bit.
Discussed offline -- it seems there are some more issues with beam search and swap; we can follow up later.
When refactoring the scheduler, there was a wrong assumption that num_new_tokens is always 1. It was detected and fixed in schedule_running, but the swapping case wasn't properly tested, so the assertion there wasn't removed.
This PR fixes the issue by removing the assertion. It also allows the scheduler to inject artificial preemption, so that we can do e2e testing for swapping and preemption. (I decided to add the artificial injection code instead of calculating the KV cache size to be intentionally small, because I thought the latter is more prone to regression.)
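A rough sketch of the artificial-injection idea, assuming an env-var-gated flag and a random trigger inside the scheduling loop (names and placement are illustrative; the real diff is the source of truth):

```python
import os
import random

# Test-only switch; off by default so production behavior is unchanged.
ENABLE_ARTIFICIAL_PREEMPT = bool(
    os.getenv("VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT", False))
ARTIFICIAL_PREEMPTION_PROB = 0.5  # illustrative value


def passed_artificial_preempt_check(scheduler) -> bool:
    """Return False to force a preemption, only when the test flag is on."""
    if not scheduler.enable_artificial_preemption:
        return True  # normal operation: never force a preemption
    return random.uniform(0, 1) >= ARTIFICIAL_PREEMPTION_PROB
```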
Fixes #4422