
Conversation

yangjianfengo1 (Contributor) commented Nov 22, 2025

Motivation

Lower-bit quantization can support longer contexts.

Modifications

Add an int2 quantized attention backend.

Usage or Command

Set the environment variable FD_ATTENTION_BACKEND to DYNAMIC_QUANT_INT2_ATTN:

export FD_ATTENTION_BACKEND=DYNAMIC_QUANT_INT2_ATTN

python -m fastdeploy.entrypoints.openai.api_server \
    --model baidu/ERNIE-4.5-21B-A3B-Paddle  \
    --port 8189 \
    --tensor-parallel-size 1 \
    --enable-chunked-prefill \
    --max-num-batched-tokens 8192 \
    --max-model-len 131072 \
    --max-num-seqs 96 \
    --num-gpu-blocks-override 100000 \
    --gpu-memory-utilization 0.85

Accuracy Tests

Refer to test_dynamic_quant_int2_attn.py
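
The accuracy tests can typically be run with pytest (path taken from the changed-files summary below; adjust if the layout differs):

python -m pytest tests/layers/test_dynamic_quant_int2_attn.py -v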

Performance Data

(performance comparison image attached in the PR; not reproduced here)

Checklist

  • Add at least one tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • Format your code and run pre-commit before committing.
  • Add unit tests. If no unit tests are added, please explain the reason in this PR.
  • Provide accuracy results.
  • If the PR targets the release branch, make sure it has already been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

paddle-bot bot commented Nov 22, 2025

Thanks for your contribution!

codecov-commenter commented Nov 22, 2025

Codecov Report

❌ Patch coverage is 51.92308% with 50 lines in your changes missing coverage. Please review.
⚠️ Please upload a report for BASE (develop@c06cfe2); the BASE report is missing.

Files with missing lines | Patch % | Lines
...ayers/attention/dynamic_quant_int2_attn_backend.py | 59.42% | 26 Missing and 2 partials ⚠️
fastdeploy/worker/gpu_model_runner.py | 18.18% | 6 Missing and 3 partials ⚠️
...eploy/model_executor/layers/attention/attention.py | 12.50% | 5 Missing and 2 partials ⚠️
fastdeploy/engine/sched/resource_manager_v1.py | 16.66% | 4 Missing and 1 partial ⚠️
...loy/model_executor/layers/quantization/kv_cache.py | 66.66% | 0 Missing and 1 partial ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop    #5173   +/-   ##
==========================================
  Coverage           ?   59.71%           
==========================================
  Files              ?      318           
  Lines              ?    38785           
  Branches           ?     5826           
==========================================
  Hits               ?    23159           
  Misses             ?    13787           
  Partials           ?     1839           
Flag | Coverage Δ
GPU | 59.71% <51.92%> (?)

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

qingqing01 requested a review from Copilot November 24, 2025 07:10
Copilot finished reviewing on behalf of qingqing01 November 24, 2025 07:11
Copilot AI (Contributor) left a comment

Pull request overview

This PR implements dynamic int2 quantization for KV cache to enable longer context support in LLM inference. The feature adds a new attention backend (DYNAMIC_QUANT_INT2_ATTN) that uses 2-bit quantization with dynamic scaling for key-value cache compression.

Key Changes:

  • Adds DynamciQuantInt2AttentionBackend for int2 quantized KV cache
  • Implements CUDA kernels for encoder/decoder cache writing and attention computation
  • Integrates with existing attention pipeline through environment variable configuration
  • Includes comprehensive unit tests for the new functionality
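
For intuition, a minimal sketch of what dynamic per-group 2-bit quantization of a KV tensor can look like follows. This is not the CUDA kernel added in this PR; the group size, the asymmetric scale/zero-point scheme, and the packing layout are assumptions made only for illustration.

import numpy as np

def int2_quantize(x: np.ndarray, group_size: int = 128):
    """Quantize a flat tensor to 2 bits with one scale/zero-point per group."""
    # Assumes the total number of elements is divisible by group_size.
    x = x.reshape(-1, group_size).astype(np.float32)
    x_min = x.min(axis=1, keepdims=True)
    x_max = x.max(axis=1, keepdims=True)
    scale = np.maximum((x_max - x_min) / 3.0, 1e-8)   # 2 bits -> 4 levels {0..3}
    q = np.clip(np.round((x - x_min) / scale), 0, 3).astype(np.uint8)
    # Pack four 2-bit values into each byte: element i lands at bit offset 2*(i % 4).
    packed = q[:, 0::4] | (q[:, 1::4] << 2) | (q[:, 2::4] << 4) | (q[:, 3::4] << 6)
    return packed, scale, x_min

def int2_dequantize(packed: np.ndarray, scale: np.ndarray, zero: np.ndarray):
    """Inverse of int2_quantize; reconstructs the per-group values."""
    q = np.stack([(packed >> s) & 0x3 for s in (0, 2, 4, 6)], axis=-1)
    q = q.reshape(packed.shape[0], -1).astype(np.float32)
    return q * scale + zero

Because the scale and zero-point are recomputed per group at cache-write time, no pre-calibrated KV-cache scales are needed, which is consistent with the PR skipping scale loading for this backend in kv_cache.py.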

Reviewed changes

Copilot reviewed 21 out of 21 changed files in this pull request and generated 2 comments.

Summary per file:

File | Description
tests/layers/test_dynamic_quant_int2_attn.py | New test file with unit tests and integration tests for int2 attention
fastdeploy/worker/gpu_model_runner.py | Adds int2 attention backend detection and KV cache initialization
fastdeploy/platforms/cuda.py | Registers DYNAMIC_QUANT_INT2_ATTN backend
fastdeploy/platforms/base.py | Adds DYNAMIC_QUANT_INT2_ATTN enum value
fastdeploy/model_executor/layers/quantization/kv_cache.py | Skips scale loading for int2 attention
fastdeploy/model_executor/layers/attention/dynamic_quant_int2_attn_backend.py | New backend implementation for int2 quantized attention
fastdeploy/model_executor/layers/attention/attention.py | Adds c16 cache buffers for int2 attention
fastdeploy/model_executor/layers/attention/__init__.py | Exports new backend class
fastdeploy/model_executor/forward_meta.py | Adds prompt_lens and step_idx fields
fastdeploy/engine/sched/resource_manager_v1.py | Custom token scheduling for int2 attention
custom_ops/setup_ops.py | Includes new int2 attention CUDA sources
custom_ops/gpu_ops/moba_attn/moba_process/split_qkv_and_rope.cu | Code formatting improvements
custom_ops/gpu_ops/moba_attn/moba_attn_utils.hpp | Code formatting and adds template specializations
custom_ops/gpu_ops/dynamic_quant_int2_attn/*.cu | New CUDA kernel implementations for int2 operations
custom_ops/gpu_ops/dynamic_quant_int2_attn/*.hpp | Header files defining kernel interfaces and utilities

jeff41404 left a comment

LGTM. Custom operators such as dynamic_quant_get_kv_from_cache are added, and the unit tests include a reference version implemented with composed operators.

Comment on lines +360 to +368

if "FD_ATTENTION_BACKEND" in os.environ and os.environ["FD_ATTENTION_BACKEND"] == "DYNAMIC_QUANT_INT2_ATTN":
remain_tokens = request.need_prefill_tokens - request.prefill_end_index
if remain_tokens < self.config.scheduler_config.max_num_batched_tokens:
# last chunk
return remain_tokens
else:
return self.config.scheduler_config.max_num_batched_tokens

Collaborator left a comment

  1. Environment variables should be accessed by importing from envs.
  2. What is the idea behind this block of code? It looks incompatible with the logic below.

yangjianfengo1 (Contributor, Author) replied

The c2 (int2) cache has a problem with the current scheduling: today, when the first query's remaining tokens are fewer than chunk_size, the scheduler pulls in tokens from the second query, and c2 cannot support that case.
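
To make the scheduling difference concrete, a small hypothetical example follows (the numbers and variable names are illustrative, not taken from the scheduler code):

chunk_size = 8192        # max_num_batched_tokens
first_remaining = 3000   # prefill tokens left for request 1
second_prefill = 6000    # prefill tokens of request 2

# Default chunked prefill may top up the chunk with tokens from the next request
# once the current one does not fill it:
default_chunk = first_remaining + min(chunk_size - first_remaining, second_prefill)
# -> 8192: one chunk mixes two requests, which the c2 (int2) cache cannot handle.

# With FD_ATTENTION_BACKEND=DYNAMIC_QUANT_INT2_ATTN the chunk is capped at the
# current request's remaining tokens, so a chunk never spans two requests:
int2_chunk = min(first_remaining, chunk_size)
# -> 3000 (the "last chunk" branch in the snippet above)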

"IluvatarAttnBackend",
"BlockAttentionBackend",
"Attention",
"DynamciQuantInt2AttentionBackend",
Collaborator left a comment

Why use an environment variable to decide whether to run the int2 KV cache, rather than determining it from cache_kv_quant_type being int2?

yangjianfengo1 (Contributor, Author) replied

Didn't we agree earlier that dynamic quantization should not go into the config, since the config describes the model itself?

yuanlehome (Collaborator) left a comment

The branch logic related to using_int2_attn needs further optimization.
