[Attention] Support multiple attention metadata builders per kv_cache_spec + proper local attention no hybrid kv cache fix #21588
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a limited subset is run. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can either: Add 🚀 …
Code Review
An excellent and comprehensive refactoring effort! The introduction of `AttentionGroup` and the dynamic wrapping for local attention significantly improve the modularity and maintainability of the attention backend handling. This new approach is much cleaner for supporting heterogeneous attention mechanisms within a model.
I've identified one area for improvement to enhance the robustness of the new implementation. Please see my detailed comment below.
Once that's addressed, this looks like a solid contribution.
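For context, a minimal sketch of the grouping structure the review refers to; the field names below are illustrative and may differ from the actual vLLM definition:

```python
from dataclasses import dataclass, field

@dataclass
class AttentionGroup:
    # Illustrative sketch only: one attention backend and one metadata
    # builder, shared by a set of layers inside a single KV cache group.
    backend: type
    metadata_builder: object
    layer_names: list[str] = field(default_factory=list)
```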
    self.attn_metadata_builders
) == 0, "Attention backends are already initialized"
for i, kv_cache_group_spec in enumerate(
attn_layers = get_layers_from_vllm_config(self.vllm_config, Attention)
The previous implementation of this function included an assertion to ensure it's not called more than once (`assert len(self.attn_backends) == 0 and len(self.attn_metadata_builders) == 0`). This assertion has been removed in the refactoring.
To maintain robustness and prevent accidental re-initialization, which could lead to a corrupted state, it would be good to add a similar assertion back, checking the new state attributes (`self.attn_groups` or `self.attn_groups_dict`):
assert not self.attn_groups, "Attention backends are already initialized"
attn_layers = get_layers_from_vllm_config(self.vllm_config, Attention)
thanks, this is looking great
for layer_name in kv_cache_group_spec.layer_names:
    attn_metadata[layer_name] = attn_metadata_i
for layer_name in attn_group.layer_names:
We need further changes to support cross-layer KV sharing. Prior to this PR, we added the KV-reusing layers to the `.layer_names` of the KV cache group of the target layer, which ensured that `attn_metadata` was populated for these layers. With this PR, this instead depends on `attn_group.layer_names`, which is populated before the KV-sharing logic is executed.
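For reference, a simplified sketch of the population loop in question; the `metadata_builder` attribute and the `build(...)` call are illustrative placeholders, not the exact vLLM signatures:

```python
def build_attn_metadata(attn_groups, common_attn_metadata):
    # Illustrative only: build one metadata object per attention group and
    # fan it out to every layer name recorded in that group.
    attn_metadata = {}
    for attn_group in attn_groups:
        attn_metadata_i = attn_group.metadata_builder.build(common_attn_metadata)
        for layer_name in attn_group.layer_names:
            # A KV-reusing layer only gets an entry here if it was added to
            # attn_group.layer_names -- which is the concern raised above.
            attn_metadata[layer_name] = attn_metadata_i
    return attn_metadata
```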
can you describe the changes needed? and the best model/command to test them with? that would be super helpful (I'm not that spun up on kv-cache sharing)
sorry, you can try it out with gemma3n:
vllm serve google/gemma-3n-E2B-it --disable-log-requests
or run the unit test:
pytest tests/v1/worker/test_gpu_model_runner.py -k "test_init_kv_cache"
But it looks like you've already handled this in your latest commits (assuming one attention group per KV cache group)
tested!
lm_eval --model vllm --model_args '{"pretrained": "google/gemma-3n-E2B-it"}' --tasks gsm8k --batch_size auto
...
|Tasks|Version| Filter |n-shot| Metric | |Value | |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k| 3|flexible-extract| 5|exact_match|↑ |0.6406|± |0.0132|
| | |strict-match | 5|exact_match|↑ |0.6399|± |0.0132|
vllm/v1/attention/backends/utils.py
#print(f"Building local attention metadata builder {type(self)}")
# TODO(lucas): this requires the attention metadata builder save the
# kv_cache_spec, as an attribute; we maybe can do something better here
common_attn_metadata = make_local_attention_virtual_batches(
I think we should also think about how different transformations on the common attn metadata can compose with each other, e.g. for yoco we need to modify the metadata prior to `make_local_attention_virtual_batches`.
I guess we can stack these like: `make_decode_only_metadata_builder(make_local_attention_metadata_builder(...))`
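As a rough illustration of the stacking idea (not the actual vLLM API; `wrap_metadata_builder` and the pass-through `build()` signature are assumptions):

```python
def wrap_metadata_builder(builder_cls, transform):
    # Return a subclass of builder_cls whose build() first applies `transform`
    # to the common attention metadata, then defers to the wrapped builder.
    class WrappedBuilder(builder_cls):
        def build(self, common_attn_metadata, *args, **kwargs):
            return super().build(transform(common_attn_metadata), *args, **kwargs)
    return WrappedBuilder

# Wrappers compose by nesting; the outermost transform runs first, so a
# yoco/decode-only transform would run before make_local_attention_virtual_batches:
#   LocalBuilder   = wrap_metadata_builder(BaseBuilder, make_local_attention_virtual_batches)
#   StackedBuilder = wrap_metadata_builder(LocalBuilder, decode_only_transform)
```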
assert len(attn_groups[group_idx]) == 1, (
    "Only one attention group per KV cache group is supported "
    "for KV-cache sharing for now.")
# TODO(lucas): I think in the future the layers that re-use a
To flesh this out a bit more, I'm not sure layers re-using KV cache should always be placed in a separate attention group. Let's say we have the following 8-layer config:
L0: Full Attn
L1: Local Attn
L2: Reuse L0 (Full Attn)
L3: Reuse L1 (Local Attn)
L4: Full Attn
L5: Local Attn
L6: Reuse L4 (Full Attn)
L7: Reuse L5 (Local Attn)
Then without hybrid KV cache, we should have `attn_groups` looking like:
[
    [
        AttentionGroup(FullAttnBackend, layers=[L0, L4]),
        AttentionGroup(LocalAttnBackend, layers=[L1, L5])
    ],
]
Where should L2, L3, L6, L7 be added? L6 and L7 qualify for faster prefill by virtue of being trailing KV-sharing layers, so they need a separate attention metadata builder (and thus a separate AttentionGroup). So in my mind, it would look like the following:
[
    [
        AttentionGroup(FullAttnBackend, layers=[L0, L4, L2]),
        AttentionGroup(LocalAttnBackend, layers=[L1, L5, L3]),
        AttentionGroup(**FasterPrefill**FullAttnBackend, layers=[L6]),
        AttentionGroup(**FasterPrefill**LocalAttnBackend, layers=[L7]),
    ],
]
Layers L2 and L3 have been placed in the AttentionGroup of their respective target layers. L6 and L7 are special cases (they qualify for faster prefill), so they have been added as two separate AttentionGroups.
WDYT? cc: @heheda12345
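A rough sketch of the grouping rule described above, assuming hypothetical helpers `backend_of(name)` and `is_trailing_kv_sharing(name)` (they are not actual vLLM functions):

```python
from collections import defaultdict

def group_layers(layer_names, backend_of, is_trailing_kv_sharing):
    # Group the layers of one KV cache group by (backend, faster-prefill flag):
    # non-trailing KV-reusing layers end up in the same group as their target
    # layer's backend, while trailing KV-sharing layers get their own groups.
    groups = defaultdict(list)
    for name in layer_names:
        key = (backend_of(name), is_trailing_kv_sharing(name))
        groups[key].append(name)
    return groups

# For the 8-layer example above this yields four groups per KV cache group:
#   (FullAttn,  False): [L0, L2, L4]   (FullAttn,  True): [L6]
#   (LocalAttn, False): [L1, L3, L5]   (LocalAttn, True): [L7]
```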
sorry, I'm not totally spun up on this FasterPrefill optimization; why don't L2 and L3 qualify?
overall though, this makes sense to me I think
> why don't L2 and L3 qualify?
L2, L3, L6 and L7 all use cross-attention to reuse the shared KV caches (let's refer to them as cross-attention layers). L0, L1, L4, L5 are self-attention layers.
For cross-attention layers, during decoding we only need the KV caches of the target layers to be populated. This means we can apply an optimization where, given N prompt tokens for a given request, we can skip N-1 tokens during prefill, as each cross-attention layer only depends on its target layer having done attention over the full N tokens.
For L6, its target layer is L4. L4 will do prefill with the full N tokens, so L6 can just work with the single last token. Same with L7. However, for L2 and L3, the self-attention layers L4 and L5 come after them, and those layers need their KV caches populated with the full N prompt tokens (for L6 and L7 cross-attention). This means that L4 and L5's forward() must run over, and return correct outputs for, the full N tokens, so L2 and L3 cannot skip the first N-1 tokens.
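A tiny sketch of the qualification rule just described, assuming a hypothetical `reuses_kv` predicate (not an actual vLLM helper):

```python
def trailing_kv_sharing_layers(ordered_layers, reuses_kv):
    # A KV-reusing (cross-attention) layer qualifies for faster prefill only
    # if no self-attention layer comes after it, since any later
    # self-attention layer still needs outputs for all N prompt tokens.
    qualifying = []
    for name in reversed(ordered_layers):
        if reuses_kv(name):
            qualifying.append(name)
        else:
            break  # the first self-attention layer from the end stops the run
    return set(qualifying)

# For [L0..L7] with L2, L3, L6, L7 reusing KV, only {L6, L7} qualify.
```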
ah makes sense; thanks for the detailed explanation!
Purpose
Support multiple attention metadata builders per kv-cache spec so we can undo the hacky fix in #21707
Test Plan
Use the same ruler task as in: #21707
Test Result