[Feature] support dynamic int2 quantization cache kv #5173
base: develop
Conversation
Thanks for your contribution!
Codecov Report

```
@@            Coverage Diff            @@
##            develop    #5173   +/-  ##
==========================================
  Coverage          ?   59.71%
==========================================
  Files             ?      318
  Lines             ?    38785
  Branches          ?     5826
==========================================
  Hits              ?    23159
  Misses            ?    13787
  Partials          ?     1839
```
Pull request overview
This PR implements dynamic int2 quantization for KV cache to enable longer context support in LLM inference. The feature adds a new attention backend (DYNAMIC_QUANT_INT2_ATTN) that uses 2-bit quantization with dynamic scaling for key-value cache compression.
Key Changes:
- Adds `DynamciQuantInt2AttentionBackend` for int2 quantized KV cache
- Implements CUDA kernels for encoder/decoder cache writing and attention computation
- Integrates with the existing attention pipeline through environment variable configuration
- Includes comprehensive unit tests for the new functionality
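To illustrate the idea behind the backend, here is a minimal NumPy sketch of dynamic per-group 2-bit quantization, assuming symmetric 4-level quantization with a dynamically computed per-group scale; the function names, group size, and level mapping are hypothetical and are not the PR's CUDA kernels.

```python
import numpy as np

def quant_int2(x: np.ndarray, group_size: int = 64):
    """Per-group dynamic 2-bit quantization; packs four 2-bit codes per byte."""
    g = x.reshape(-1, group_size)
    scale = np.abs(g).max(axis=-1, keepdims=True) + 1e-8        # dynamic per-group scale
    # map g/scale in [-1, 1] onto codes {0, 1, 2, 3} (levels -1, -1/3, 1/3, 1)
    q = np.clip(np.round((g / scale + 1.0) * 1.5), 0, 3).astype(np.uint8)
    packed = q[:, 0::4] | (q[:, 1::4] << 2) | (q[:, 2::4] << 4) | (q[:, 3::4] << 6)
    return packed, scale

def dequant_int2(packed: np.ndarray, scale: np.ndarray, group_size: int = 64):
    """Inverse of quant_int2; returns float32 values shaped (n_groups, group_size)."""
    q = np.empty((packed.shape[0], group_size), dtype=np.uint8)
    q[:, 0::4] = packed & 3
    q[:, 1::4] = (packed >> 2) & 3
    q[:, 2::4] = (packed >> 4) & 3
    q[:, 3::4] = (packed >> 6) & 3
    return (q.astype(np.float32) / 1.5 - 1.0) * scale
```

Because the scale is recomputed from each group's own maximum, no calibration or per-model scale loading is needed, which is why the actual backend skips scale loading in kv_cache.py.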
Reviewed changes
Copilot reviewed 21 out of 21 changed files in this pull request and generated 2 comments.
Summary per file:
| File | Description |
|---|---|
| tests/layers/test_dynamic_quant_int2_attn.py | New test file with unit tests and integration tests for int2 attention |
| fastdeploy/worker/gpu_model_runner.py | Adds int2 attention backend detection and KV cache initialization |
| fastdeploy/platforms/cuda.py | Registers DYNAMIC_QUANT_INT2_ATTN backend |
| fastdeploy/platforms/base.py | Adds DYNAMIC_QUANT_INT2_ATTN enum value |
| fastdeploy/model_executor/layers/quantization/kv_cache.py | Skips scale loading for int2 attention |
| fastdeploy/model_executor/layers/attention/dynamic_quant_int2_attn_backend.py | New backend implementation for int2 quantized attention |
| fastdeploy/model_executor/layers/attention/attention.py | Adds c16 cache buffers for int2 attention |
| fastdeploy/model_executor/layers/attention/__init__.py | Exports new backend class |
| fastdeploy/model_executor/forward_meta.py | Adds prompt_lens and step_idx fields |
| fastdeploy/engine/sched/resource_manager_v1.py | Custom token scheduling for int2 attention |
| custom_ops/setup_ops.py | Includes new int2 attention CUDA sources |
| custom_ops/gpu_ops/moba_attn/moba_process/split_qkv_and_rope.cu | Code formatting improvements |
| custom_ops/gpu_ops/moba_attn/moba_attn_utils.hpp | Code formatting and adds template specializations |
| custom_ops/gpu_ops/dynamic_quant_int2_attn/*.cu | New CUDA kernel implementations for int2 operations |
| custom_ops/gpu_ops/dynamic_quant_int2_attn/*.hpp | Header files defining kernel interfaces and utilities |
jeff41404 left a comment:
LGTM. Custom operators such as dynamic_quant_get_kv_from_cache are added, and the unit tests include a composite-operator reference implementation.
```python
if "FD_ATTENTION_BACKEND" in os.environ and os.environ["FD_ATTENTION_BACKEND"] == "DYNAMIC_QUANT_INT2_ATTN":
    remain_tokens = request.need_prefill_tokens - request.prefill_end_index
    if remain_tokens < self.config.scheduler_config.max_num_batched_tokens:
        # last chunk
        return remain_tokens
    else:
        return self.config.scheduler_config.max_num_batched_tokens
```
- Reading the environment variable should go through an import from envs.
- What is the idea behind this block? It looks incompatible with the logic below.
c2 has issues with the current scheduling: when the first query's remaining tokens are fewer than chunk_size, the scheduler pulls in tokens from the second query, and c2 cannot handle that case.
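To make this concrete, a worked example of the quoted scheduling branch; the numbers are hypothetical and only illustrate the behavior described above.

```python
# Hypothetical numbers, purely illustrative of the quoted scheduling branch:
max_num_batched_tokens = 256
need_prefill_tokens, prefill_end_index = 1000, 900

remain_tokens = need_prefill_tokens - prefill_end_index   # 100
# remain_tokens (100) < max_num_batched_tokens (256), so the int2 path returns
# exactly 100 for the last chunk instead of topping the batch up with tokens
# from the next request, which the int2 (c2) cache layout cannot mix in.
```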
| "IluvatarAttnBackend", | ||
| "BlockAttentionBackend", | ||
| "Attention", | ||
| "DynamciQuantInt2AttentionBackend", |
Why use an environment variable to decide whether to run cache kv int2, instead of determining it from cache_kv_quant_type being int2?
Didn't we agree earlier that dynamic quantization cannot be written into the config, since the config describes the model itself?
yuanlehome left a comment:
The branch logic related to using_int2_attn needs further refinement.
Motivation
Lower-bit quantization can support longer contexts.
Modifications
Add an int2 quantized attention backend.
Usage or Command
Just set the environment variable FD_ATTENTION_BACKEND to DYNAMIC_QUANT_INT2_ATTN.
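For example, a minimal sketch of setting the variable in-process before the engine is created; in practice one would typically export it in the shell before launching FastDeploy, and the launch command itself is not shown in this PR.

```python
import os

# Select the int2 KV-cache attention backend; must be set before engine start.
os.environ["FD_ATTENTION_BACKEND"] = "DYNAMIC_QUANT_INT2_ATTN"

# ... then construct and launch the FastDeploy engine / server as usual.
```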
Accuracy Tests
Refer to test_dynamic_quant_int2_attn.py
Performance Data
Checklist
- Add one of the PR tags: [FDConfig], [APIServer], [Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]
- Run pre-commit before commit.
- For a release branch, make sure the PR has been submitted to the develop branch first, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.