[Core] Use CpuGpuBuffer for block table tensors
#24795
Conversation
Signed-off-by: Nick Hill <nhill@redhat.com>
Code Review
This pull request refactors the BlockTable class to use the CpuGpuBuffer helper for managing tensors that exist on both CPU and GPU, specifically for block_table and slot_mapping. This change simplifies the code by centralizing the creation and management of these paired buffers, removing redundant manual handling of CPU, GPU, and NumPy tensor versions. The changes are well-contained within vllm/v1/worker/block_table.py, and the necessary adjustments in vllm/v1/worker/gpu_model_runner.py to adapt to the updated BlockTable API have been correctly applied. The refactoring improves code clarity and maintainability without introducing any apparent issues.
cc @WoosukKwon please confirm if this is ok to avoid conflicting with your refactoring efforts
My PR will rewrite the block table entirely, but this change looks good for now.
Pull Request is not mergeable
Signed-off-by: Nick Hill <nhill@redhat.com>
The failing test is also failing on main :( example: https://buildkite.com/vllm/ci/builds/30944#01995329-1ec7-4d17-abdf-0283fa8115f5
vllm-project/vllm#24795 and vllm-project/vllm#24615 and vllm-project/vllm#24078 --------- Signed-off-by: Agata Dobrzyniewicz <adobrzyniewicz@habana.ai>
In the GPU model runner input batch.