[WIP] Llama 4: Hybrid KV buffer (disable radix attention) #5853

Draft: wants to merge 10 commits into main

Conversation

@tarinkk (Contributor) commented Apr 28, 2025

Motivation

We allocate KV buffers of different sizes for the global and local attention layers in Llama 4. Local-attention layers only need to cache tokens within their attention window, so giving them a smaller buffer improves memory usage.

Modifications

We modify the KV buffer sizes in MHATokenToKVPool and use two TokenToKVPoolAllocator instances, one for the global-attention layers and one for the local-attention layers.
The hybrid ratio, a value between 0 and 1, is set with --enable-hybrid-kvcache and defaults to 1.0. A ratio of 0 means a pure uniform split (local_size / global_size = 1); a ratio of 1.0 means a pure hybrid split (local_size / global_size = local_attention_size / context_length). See the sketch below for how the ratio interpolates between these two endpoints.
Currently, only page size = 1 is supported, and radix attention and CUDA graph must be disabled.
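
For illustration, here is a minimal Python sketch of how the hybrid ratio could translate into per-layer KV-pool sizes. The function name, its arguments, and the linear interpolation between the two endpoints are assumptions made for this sketch, not the code in this PR; only the two endpoint definitions above come from the PR description.

```python
def split_kv_pool_sizes(
    total_tokens: int,          # total KV-cache token budget shared by all layers
    num_global_layers: int,     # layers with full (global) attention
    num_local_layers: int,      # layers with windowed (local) attention
    local_attention_size: int,  # local attention window, in tokens
    context_length: int,        # model context length, in tokens
    hybrid_ratio: float = 1.0,  # 0 = uniform split, 1 = fully hybrid split
) -> tuple[int, int]:
    """Return (global_tokens_per_layer, local_tokens_per_layer).

    Interpolates local_size / global_size between 1 (uniform) and
    local_attention_size / context_length (fully hybrid), then sizes the
    two pools so the total token budget stays fixed.
    """
    ratio = (1 - hybrid_ratio) * 1.0 + hybrid_ratio * (local_attention_size / context_length)
    # Budget constraint: num_global * g + num_local * (ratio * g) = total_tokens
    global_tokens = int(total_tokens / (num_global_layers + num_local_layers * ratio))
    local_tokens = int(global_tokens * ratio)
    return global_tokens, local_tokens


# Illustrative numbers only, not Llama 4's actual configuration:
g, l = split_kv_pool_sizes(
    total_tokens=1_000_000,
    num_global_layers=12,
    num_local_layers=36,
    local_attention_size=8192,
    context_length=1_048_576,
    hybrid_ratio=1.0,
)
```

With hybrid_ratio=1.0 and a long context length, the local-attention layers receive a much smaller buffer than the global layers, which is where the memory savings come from.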

Checklist
