
Conversation

rkooo567 (Collaborator) commented Apr 29, 2024:

When the scheduler was refactored, it carried the wrong assumption that num_new_tokens is always 1. This was detected and fixed in schedule_running, but the swapping path wasn't properly tested, so the stale assertion there was never deleted.

This PR fixes the issue by removing the assertion. It also lets the scheduler inject artificial preemption so that we can do e2e testing for swapping and preemption. (I decided to add the artificial injection code instead of making the KV cache intentionally small because I thought the latter is more prone to regression.)
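For reference, a minimal sketch of what an env-var-gated injection hook like this can look like (the probability constant and the helper name _passed_artificial_preempt_check below are illustrative, not necessarily the exact code in this PR):

import os
import random

# Test-only switch; disabled by default so production scheduling is unaffected.
ENABLE_ARTIFICIAL_PREEMPT = (
    os.getenv("VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT", "0") == "1")
ARTIFICIAL_PREEMPTION_PROB = 0.5  # illustrative value


class Scheduler:

    def __init__(self) -> None:
        # Cache the module-level flag on the instance to avoid a global
        # lookup in the hot scheduling loop.
        self.enable_artificial_preemption = ENABLE_ARTIFICIAL_PREEMPT

    def _passed_artificial_preempt_check(self) -> bool:
        """Return False with some probability when the test flag is set.

        The running-queue scheduling loop treats a False return as "no free
        blocks", which forces the preemption/swap path so e2e tests can
        exercise it without shrinking the KV cache.
        """
        if not self.enable_artificial_preemption:
            return True
        return random.uniform(0, 1) >= ARTIFICIAL_PREEMPTION_PROB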

Fixes #4422


PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain code quality and improves the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title should be prefixed appropriately to indicate the type of change. Please use one of the following prefixes:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] for changes to the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to the Google Python style guide and the Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented so that future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behavior of vLLM. This helps vLLM users understand and use the new features or changes.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag the PR with rfc-required and might not review it.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, it will be assigned to a reviewer. Reviewers pick up PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will add an action-required label to the PR if changes are required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!

proposal_lens=proposal_lens)

- exception_secret = 'artifical stop'
+ exception_secret = 'artificial stop'
rkooo567 (author):

Unrelated, but I found this anyway, so I'm just fixing it here.

Collaborator:
typo killer! 🎯


# The following field is test-only. It is used to inject artificial
# preemption.
self.enable_artificial_preemption = ENABLE_ARTIFICIAL_PREEMPT
rkooo567 (author):
It could be premature optimization, but accessing a global variable is usually more expensive than accessing an instance attribute, so I just moved it here.
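For what it's worth, a quick standalone way to sanity-check that assumption with timeit (toy code, not from this PR; results vary by Python version and machine):

import timeit

ENABLE_ARTIFICIAL_PREEMPT = False


class Scheduler:

    def __init__(self) -> None:
        # Copy the module-level constant onto the instance once, at init time.
        self.enable_artificial_preemption = ENABLE_ARTIFICIAL_PREEMPT

    def check_global(self) -> bool:
        # Looks the flag up in the module globals on every call.
        return ENABLE_ARTIFICIAL_PREEMPT

    def check_attr(self) -> bool:
        # Looks the flag up on the instance instead.
        return self.enable_artificial_preemption


s = Scheduler()
print("global lookup:", timeit.timeit(s.check_global, number=1_000_000))
print("attr lookup:  ", timeit.timeit(s.check_attr, number=1_000_000))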

rkooo567 changed the title from "[Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large." to "[Bug fix][Core] assert num_new_tokens == 1 fails when SamplingParams.n is not 1 and max_tokens is large & Add tests for preemption" on Apr 29, 2024
Comment on lines +17 to +20
assert ENABLE_ARTIFICIAL_PREEMPT is True, (
"Use an env var VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT=1. "
"`VLLM_TEST_ENABLE_ARTIFICIAL_PREEMPT=1 pytest "
"tests/basic_correctness/test_preemption.py`")
Collaborator:
I created num_gpu_blocks_override for spec decode preemption tests and it works pretty well!

@pytest.mark.parametrize(
    "common_llm_kwargs",
    [{
        "block_size": 8,
        # 2 for small prompt, 256//8 for generated.
        "num_gpu_blocks_override": 2 + 256 // 8,
        "max_model_len": (2 + 256 // 8) * 8,
        # Skip cuda graph recording for fast test.
        "enforce_eager": True,
        # Required for spec decode.
        "use_v2_block_manager": True
    }])
@pytest.mark.parametrize("per_test_common_llm_kwargs", [
    {
        "model": "JackFram/llama-160m",
    },
])
@pytest.mark.parametrize("baseline_llm_kwargs", [{}])
@pytest.mark.parametrize("test_llm_kwargs", [
    {
        "speculative_model": "JackFram/llama-68m",
        "num_speculative_tokens": 5,
    },
])
@pytest.mark.parametrize(
    "output_len",
    [
        # Use small output len for fast test.
        256,
    ])
@pytest.mark.parametrize("batch_size", [4])
@pytest.mark.parametrize("seed", [1])
def test_spec_decode_e2e_greedy_correctness_with_preemption(
        baseline_llm_generator, test_llm_generator, batch_size: int,
        output_len: int):
    """Verify greedy equality, even when some sequences are preempted mid-
    generation.
    """
    run_greedy_equality_correctness_test(baseline_llm_generator,
                                         test_llm_generator,
                                         batch_size,
                                         max_output_len=output_len,
                                         force_output_len=True)

rkooo567 (author) commented Apr 30, 2024:

How is 2 + 256 // 8 decided? One concern is that it could be a little prone to breaking on code changes (e.g., if the example prompt gets longer), so ideally we should calculate this programmatically.

Alternatively, is there a way to verify from the engine that preemption happened? If so, we can just add an assert so that when things change, it will at least be caught.

rkooo567 (author):

Okay, I followed the suggestion and added a test-only variable to the scheduler to verify that preemption happens.
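Roughly, the idea is a sketch like the one below (the attribute name num_cumulative_preemption and the helper are illustrative; see the diff for the actual names):

# Scheduler side (sketch): count every preemption so tests can observe it.
class Scheduler:

    def __init__(self) -> None:
        # Test/debug-only counter; incremented on every preemption.
        self.num_cumulative_preemption: int = 0

    def _preempt(self, seq_group, blocks_to_swap_out) -> None:
        self.num_cumulative_preemption += 1
        # ... actual recompute/swap logic goes here ...


# Test side (sketch): after generation, assert the preemption path was hit,
# so the test keeps covering the intended code path even if defaults change.
def assert_preemption_happened(scheduler: Scheduler) -> None:
    assert scheduler.num_cumulative_preemption > 0, (
        "Expected at least one preemption; this test no longer exercises "
        "the preemption path.")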

Collaborator:

The approach you went with is good. Technically you don't need to check that preemption happened, because we construct the number of blocks such that preemption must occur. E.g., if each sequence generates 256 tokens (ignore_eos=True) and the block size is 8, only bs=1 can complete with 32 blocks; if we set bs=2 or higher, then as the sequences grow a preemption is guaranteed.
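To spell out the arithmetic (a back-of-the-envelope check, not code from the PR):

# Back-of-the-envelope check of the block math used in the test above.
block_size = 8
prompt_blocks = 2                    # small prompt fits in ~2 blocks
gen_blocks = 256 // block_size       # 32 blocks for 256 generated tokens

num_gpu_blocks = prompt_blocks + gen_blocks   # 34 blocks in the whole cache
blocks_per_seq = prompt_blocks + gen_blocks   # each sequence eventually needs 34
batch_size = 4

# Only a single sequence can ever hold all of its blocks at once, so with
# batch_size >= 2 the growing sequences are guaranteed to be preempted.
assert batch_size * blocks_per_seq > num_gpu_blocks
print(f"need {batch_size * blocks_per_seq} blocks, have {num_gpu_blocks}")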

Collaborator:

You may want to add a comment explaining this, as I did for the block manager preemption test. I should have added it for spec decode as well.

"""Verify block manager v2 produces same outputs as block manager v1, even
when there is preemption.
This constructs two LLM, each with limited number of GPU blocks. The limit
is decided such that as the sequences in the batch grow, sequences must be
preempted and removed from cache.
If the output token ids are equivalent, then we have confidence that the KV
cache is not corrupted in the v2 block manager.
NOTE: We want a significant number of generated tokens so that any incorrect
KV mapping has time to build up error.
"""


# If chunked prefill is preempted, it is possible chunked prefill
# comes before decoding. In that case, it is possible there's no
# budget for decoding which returns num_running_tokens == 0.
if num_running_tokens == 0:
rkooo567 (author):

We could technically avoid this if we always sorted the waiting queue by FCFS, but we don't do that right now for performance reasons.

It can happen in this scenario:

step 1
waiting [1, 2, 3]
running []

step 2
waiting []
running [1, 2, 3]

step 3 (all requests preempted)
waiting [3, 2, 1]  # NOTE: we don't sort waiting for perf; preempted requests are appended to the left.
running []

step 4
waiting [1]
running [2, 3]  # NOTE: running is always sorted by FCFS

step 5 (1 is scheduled)
waiting []
running [1, 2, 3] ===> 1 is a chunked prefill because it was just scheduled from waiting, and it can consume the entire token budget
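A tiny self-contained illustration of those steps with a deque (toy code, not the actual scheduler):

from collections import deque

waiting = deque([1, 2, 3])   # step 1: three requests arrive
running = deque()

running.extend(waiting)      # step 2: all three get scheduled
waiting.clear()

# step 3: everything is preempted; preempted requests are appended to the
# LEFT of the waiting queue and the queue is NOT re-sorted (for performance).
for req in list(running):
    running.remove(req)
    waiting.appendleft(req)
assert list(waiting) == [3, 2, 1]

# step 4: only 2 and 3 fit; running stays FCFS-sorted, 1 keeps waiting.
running = deque([2, 3])
waiting = deque([1])

# step 5: request 1 re-enters from waiting as a chunked prefill and may use
# up the whole token budget, leaving num_running_tokens == 0 for the decodes.
running.appendleft(1)
assert list(running) == [1, 2, 3]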

rkooo567 (author) commented Apr 30, 2024:

@cadedaniel for some reason the swapping test doesn't trigger preemption. I will fix it and ping you (and add comments for the GPU block size as you suggested).

rkooo567 (author) commented May 1, 2024:

@cadedaniel the new approach was a little difficult to make work with the swap test, so I just reverted to the original approach. I will let you know when it is ready to be merged.

@rkooo567 rkooo567 mentioned this pull request May 1, 2024
cadedaniel (Collaborator) left a comment:

what's up with the swap test? happy to meet to get to the bottom of it

Comment on lines -452 to -453
# Make sure all queues are updated.
assert len(running_queue) == 0
Collaborator:

Is this assert important? Sorry, I missed this earlier.

rkooo567 (author) commented May 2, 2024:

This means len(running_queue) != num_seqs sometimes. I think it is harmless (and it only happens with preemption + chunked prefill); the running queue is sorted by FCFS anyway. There's a way to keep this assert, but it would affect performance a bit.

cadedaniel (Collaborator):

> what's up with the swap test? happy to meet to get to the bottom of it

Discussed offline -- it seems there are some more issues with beam search and swap; we can follow up later.

cadedaniel merged commit 0d62fe5 into vllm-project:main on May 2, 2024
robertgshaw2-redhat pushed a commit to neuralmagic/nm-vllm that referenced this pull request May 6, 2024
…n is not 1 and max_tokens is large & Add tests for preemption (vllm-project#4451)
z103cb pushed a commit to z103cb/opendatahub_vllm that referenced this pull request May 7, 2024
…n is not 1 and max_tokens is large & Add tests for preemption (vllm-project#4451)
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request May 7, 2024
…n is not 1 and max_tokens is large & Add tests for preemption (vllm-project#4451)