[Feature] kvcache cuda kernel #1247
Conversation
Summary of Changes
Hello @DwyaneShi, I'm Gemini Code Assist[^1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request significantly enhances the AIBrix KV cache framework by integrating a high-performance CUDA kernel for key-value cache operations. It establishes a robust C++/CUDA build pipeline and refactors dependency management, laying the groundwork for efficient GPU-accelerated memory handling. The changes enable seamless interaction between Python and the native CUDA kernels, complete with comprehensive testing to ensure reliability.
Highlights
- CUDA Kernel Migration: The core AIBrix KV cache CUDA kernel (`cache_kernels.cu`) has been migrated from the vLLM repository, enabling GPU-accelerated KV cache operations within AIBrix.
- C++/CUDA Build System Integration: A comprehensive C++/CUDA build system, adapted from vLLM, has been integrated using CMake and `setup.py`. This facilitates the compilation and packaging of native extensions.
- Dependency Management Overhaul: The project's dependency management has shifted from `poetry` to `setuptools` and `pip`-based `requirements.txt` files, streamlining the build and installation process for C++ and CUDA extensions.
- Python Bindings for KV Cache Operations: New Python modules (`_custom_ops.py`) and C++ bindings (`torch_bindings.cpp`) have been added to expose the `reshape_and_cache_multi_layer` and `reshape_and_offload_multi_layer` CUDA kernels to Python, allowing direct invocation from PyTorch (a usage sketch follows this list).
- Comprehensive Testing for Kernels: New Pytest-based tests (`test_cache_ops.py`) have been introduced to validate the functionality and correctness of the `reshape_and_cache_multi_layer` and `reshape_and_offload_multi_layer` kernels across various configurations and data types.
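For orientation, here is a minimal, hypothetical usage sketch of the new Python-facing ops. The import path, argument order, and tensor shapes are assumptions for illustration (based on the C++ signature quoted later in this thread), not the PR's verified API:

```python
# Hypothetical usage sketch of the multi-layer cache op added by this PR.
# Assumptions: the wrapper in _custom_ops.py is importable as
# `aibrix_kvcache._custom_ops`, and the argument order follows the quoted C++
# signature (remaining arguments such as scales/layout are elided).
import torch

num_layers, num_blocks, block_size, embed_dim = 2, 4, 16, 1024
device = "cuda" if torch.cuda.is_available() else "cpu"

# Engine-side paged KV cache, one tensor per layer; shape is purely illustrative:
# [2 (K/V), num_blocks, block_size, embed_dim]
kv_caches = [
    torch.randn(2, num_blocks, block_size, embed_dim,
                dtype=torch.float16, device=device)
    for _ in range(num_layers)
]
# Offload-side blocks plus the slot mapping that pairs engine slots with offload slots.
offload_kv_cache_blocks = [torch.empty_like(kv_caches[0]) for _ in range(num_blocks)]
slot_mapping = torch.arange(block_size, dtype=torch.int64, device=device)

# from aibrix_kvcache import _custom_ops as ops   # assumed import path
# ops.reshape_and_cache_multi_layer(
#     offload_kv_cache_blocks, kv_caches, slot_mapping,
#     block_size, "auto",   # kv_cache_dtype; remaining args omitted in this sketch
# )
```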
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either `/gemini <command>` or `@gemini-code-assist <command>`. Below is a summary of the supported commands.
| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a `.gemini/` folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
Footnotes

[^1]: Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.
Code Review
The pull request migrates a CUDA kernel for KV cache operations from vLLM and establishes a new build system based on CMake and setuptools. The changes include new C++/CUDA source files, CMake build scripts, and a transition in dependency management from Poetry to pip with requirements files. The core logic appears to be a direct port from vLLM. The integration, code quality, and testing aspects of these changes have been reviewed, identifying a high-severity issue concerning disabled tests and several medium-severity issues related to code consistency and maintainability.
```python
from vllm.platforms import current_platform
from vllm.utils import get_kv_cache_torch_dtype

pytest.skip(allow_module_level=True)
```
The entire test module is disabled due to `pytest.skip(allow_module_level=True)`. This prevents the new CUDA kernels for `reshape_and_cache` and `reshape_and_offload` from being tested, which is a significant gap in test coverage for this new core feature. Please remove this line to ensure the kernels are properly validated.
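One conventional alternative, sketched below, is to skip only when no CUDA device is present rather than unconditionally; this is a suggestion, not what the PR currently does:

```python
# Minimal sketch: skip the module only on hosts without a CUDA device,
# so the kernels are still exercised on GPU runners.
import pytest
import torch

if not torch.cuda.is_available():
    pytest.skip("CUDA is required for these kernel tests", allow_module_level=True)
```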
Force-pushed from d857214 to 8635799.
Signed-off-by: Haiyang Shi <haiyang.shi@bytedance.com>
```diff
@@ -84,7 +84,8 @@ jobs:
         popd

         pushd python/aibrix_kvcache
-        poetry publish --build
+        pip install -r requirements/build.txt -r requirements/dev.txt -r requirements/core.txt
+        python -m build --sdist --wheel --no-isolation
```
Two issues here:
- Seems the requirements are not well organized? For build purposes, should there be a single top-level file? The rest of the files could be embedded in that txt file, e.g.:

      -r core.txt
      -r xxx.txt
      pytest

- Seems the behavior has been changed: we used to use poetry to publish the package to PyPI. Now it only has the build process; do you still want to automate the publish process?
> Two issues here:
> - Seems the requirements are not well organized? For build purposes, should there be a single top-level file? The rest of the files could be embedded in that txt file, e.g. `-r core.txt -r xxx.txt pytest`.

Could be enhanced. I just adapted and simplified the requirements txt files from vLLM; I will reorganize them in another commit.

> - Seems the behavior has been changed: we used to use poetry to publish the package to PyPI. Now it only has the build process; do you still want to automate the publish process?

Right now the build command will not compile the CUDA kernel in the GitHub environment, since it cannot detect a valid CUDA env. I will figure out how to build CUDA kernels with the GitHub workflow and re-enable the automatic publish process.
sounds good
```cmake
# requirements.txt files and should be kept consistent. The ROCm torch
# versions are derived from docker/Dockerfile.rocm
#
set(TORCH_SUPPORTED_VERSION_CUDA "2.7.0")
```
this version comes from the pinned vLLM dockerfile?
yes, almost all the versions come from vllm
```cpp
void reshape_and_cache_multi_layer(
    const std::vector<torch::Tensor> &offload_kv_cache_blocks,
    const std::vector<torch::Tensor> &kv_caches, torch::Tensor &slot_mapping,
    const int64_t block_size, const std::string &kv_cache_dtype,
```
We had a short discussion yesterday on block-size support at the API level. Technically, can a caller use a customized value instead of the exact same one used by the engine?
The `block_size` parameter is the block size used on the engine side; the `offload_kv_cache_block_size` derived from the `offload_kv_cache_blocks` parameter is the block size used on our KV cache side. They could be different in theory; I will add an e2e test for it. The offload-side block size is derived as follows:
```cpp
const int64_t offload_kv_cache_block_size =
    (layout == aibrix::KVCacheOffloadLayout::kLCND)
        ? offload_kv_cache_block_shape[2]
        : offload_kv_cache_block_shape[0];
const int64_t offload_kv_cache_num_layers =
    (layout == aibrix::KVCacheOffloadLayout::kLCND)
        ? offload_kv_cache_block_shape[0]
        : offload_kv_cache_block_shape[2];
```
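To make the two-block-size point concrete, here is a small Python sketch mirroring the shape selection above; the meaning of each dimension and the example shapes are assumptions for illustration:

```python
# Sketch of the shape convention implied by the C++ snippet above: for an
# LCND-layout offload block, dim 0 is num_layers and dim 2 is block_size;
# otherwise the two dimensions are swapped. The engine's block_size and this
# offload-side block size need not match.
def offload_block_geometry(offload_block_shape, layout_is_lcnd: bool):
    if layout_is_lcnd:
        block_size, num_layers = offload_block_shape[2], offload_block_shape[0]
    else:
        block_size, num_layers = offload_block_shape[0], offload_block_shape[2]
    return block_size, num_layers

# Example (shapes assumed): engine block_size = 16 while the offload cache
# uses 512-token blocks across 2 layers.
print(offload_block_geometry((2, 2, 512, 1024), layout_is_lcnd=True))  # (512, 2)
```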
```diff
@@ -0,0 +1,672 @@
+#pragma once
```
Is this one our own, or from another repo?
Is the quantization here to adapt to the engine's quantization (AWQ, etc.) to make sure the kernel still works?
Only `cache_kernels.cu` is our own; the other files are ported from vLLM to support onload/offload with quantization.
```cpp
const float *v_scale = v_scales[layer_idx];

// Copy data between kv_cache and offload_kv_cache
for (int i = tid; i < embed_dim; i += num_threads) {
```
What's the common embed_dim value here? If the value is not aligned with num_threads, will there be any performance issues? Just curious.
1024/tp_size for Llama 3 models. The value should be aligned with num_threads (which is max{128, a number derived from embed_dim}).
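As a rough illustration of the alignment argument (the thread-count rule below is an assumed placeholder, not the kernel's actual heuristic):

```python
# The grid-stride loop `for (int i = tid; i < embed_dim; i += num_threads)` runs
# ceil(embed_dim / num_threads) rounds per thread; in the last round only
# (embed_dim % num_threads) threads do work, so a misaligned embed_dim leaves the
# rest of the block idle for that round.
import math

def last_round_utilization(embed_dim: int, num_threads: int) -> float:
    remainder = embed_dim % num_threads
    return 1.0 if remainder == 0 else remainder / num_threads

num_threads = 128  # the max{128, ...} floor mentioned above
for embed_dim in (1024, 1024 // 8, 96):  # tp=1, tp=8, and a hypothetical misaligned case
    rounds = math.ceil(embed_dim / num_threads)
    print(embed_dim, rounds, f"{last_round_utilization(embed_dim, num_threads):.0%}")
# 1024 -> 8 rounds, 100%;  128 -> 1 round, 100%;  96 -> 1 round, 75%
```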
```cpp
const int64_t kv_cache_num_blocks = kv_cache_shape[1];

torch::Tensor offload_kv_cache_ptrs =
```
It seems `get_device_ptrs` is invoked multiple times, and each call creates a new `std::vector`. Do you think using a global tensor would help improve efficiency? Would there be performance issues for long sequences?
Right now a sequence is chunked into 512-token chunks, and each 512-token chunk invokes the kernel once. The typical latency of creating a 512-integer vector in C++ is < 1 us, which is negligible compared with ms-level kernel execution time.
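A quick back-of-the-envelope check of that claim (the 1 ms kernel time is an assumed figure for illustration; the chunk size and vector-setup latency come from the comment above):

```python
# Per-chunk pointer-vector setup is a tiny fraction of total kernel time,
# even for a long sequence.
seq_len = 32_768                    # example long sequence
chunk_tokens = 512                  # chunking granularity mentioned above
kernels = seq_len // chunk_tokens   # 64 kernel launches
vector_setup_us = 1.0               # < 1 us per 512-entry std::vector, per the comment
kernel_time_us = 1_000.0            # assumed ~1 ms per kernel invocation

overhead = kernels * vector_setup_us
total = kernels * (vector_setup_us + kernel_time_us)
print(f"setup overhead: {overhead:.0f} us of {total:.0f} us "
      f"({100 * overhead / total:.2f}%)")  # ~0.10%
```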
I am not an expert in this area, but I tried my best to help review this change. Overall it looks good to me.
lgtm
Signed-off-by: Haiyang Shi <haiyang.shi@bytedance.com> Co-authored-by: Haiyang Shi <haiyang.shi@bytedance.com>
Pull Request Description
Note: `cache_kernels.cu` includes the AIBrix CUDA kernel, while the other C++ files are ported from vLLM.
Related Issues
Resolves: #[Insert issue number(s)]
Important: Before submitting, please complete the description above and review the checklist below.
Contribution Guidelines (Expand for Details)
We appreciate your contribution to aibrix! To ensure a smooth review process and maintain high code quality, please adhere to the following guidelines:
Pull Request Title Format
Your PR title should start with one of these prefixes to indicate the nature of the change:
- `[Bug]`: Corrections to existing functionality
- `[CI]`: Changes to build process or CI pipeline
- `[Docs]`: Updates or additions to documentation
- `[API]`: Modifications to aibrix's API or interface
- `[CLI]`: Changes or additions to the Command Line Interface
- `[Misc]`: For changes not covered above (use sparingly)

Note: For changes spanning multiple categories, use multiple prefixes in order of importance.
Submission Checklist
By submitting this PR, you confirm that you've read these guidelines and your changes align with the project's contribution standards.