
[HOT-FIX] Add kv_sharing_target_layer_name argument to cutlass_mla backend #19374

Merged
vllm/v1/attention/backends/mla/cutlass_mla.py (3 changes: 2 additions & 1 deletion)

@@ -40,12 +40,13 @@ def __init__(
             blocksparse_params: Optional[dict[str, Any]],
             logits_soft_cap: Optional[float],
             attn_type: str,
+            kv_sharing_target_layer_name: Optional[str],
Contributor (review comment, severity: medium):
Could you clarify the intent behind adding kv_sharing_target_layer_name as part of this hot-fix?

The superclass MLACommonImpl (specifically in vllm/v1/attention/backends/mla/common.py lines 599-600) has a check:

if kv_sharing_target_layer_name is not None:
    raise NotImplementedError("KV sharing is not supported for MLA")

This means that if a non-None value is passed for kv_sharing_target_layer_name, initialization will fail.

Is the purpose of this change primarily to achieve signature compatibility with other attention backends that might be instantiated with this parameter, even if CutlassMLAImpl doesn't support the feature itself? Or is this a preparatory step for future implementation?

Understanding this will help assess whether the current behavior (raising NotImplementedError when the feature is requested) is the desired outcome for this hot-fix.
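For context, here is a minimal, self-contained sketch of the pattern under discussion. The Sketch-suffixed class names are hypothetical stand-ins, not the real vLLM classes (which take many more constructor arguments): the subclass accepts kv_sharing_target_layer_name purely for constructor-signature compatibility and forwards it to the base class, whose guard rejects any non-None value.

from typing import Optional


class MLACommonImplSketch:
    # Hypothetical stand-in for MLACommonImpl in
    # vllm/v1/attention/backends/mla/common.py.
    def __init__(self, kv_sharing_target_layer_name: Optional[str] = None,
                 **kwargs) -> None:
        # Mirrors the guard quoted above: MLA backends reject KV sharing.
        if kv_sharing_target_layer_name is not None:
            raise NotImplementedError("KV sharing is not supported for MLA")


class CutlassMLAImplSketch(MLACommonImplSketch):
    # Hypothetical stand-in for CutlassMLAImpl after this hot-fix.
    def __init__(self, kv_sharing_target_layer_name: Optional[str] = None,
                 **kwargs) -> None:
        # The hot-fix only accepts and forwards the argument; it does not
        # implement KV sharing in the CUTLASS MLA backend.
        super().__init__(kv_sharing_target_layer_name, **kwargs)


CutlassMLAImplSketch(kv_sharing_target_layer_name=None)  # constructs fine
# CutlassMLAImplSketch(kv_sharing_target_layer_name="some_layer")
# would raise NotImplementedError via the base-class guard.

The practical effect is that the attention backends keep a uniform constructor signature, so the code that instantiates a backend needs no per-backend special casing, while an actual attempt to use KV sharing with MLA still fails loudly at initialization.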

             # MLA Specific Arguments
             **mla_args) -> None:
         super().__init__(num_heads, head_size, scale, num_kv_heads,
                          alibi_slopes, sliding_window, kv_cache_dtype,
                          blocksparse_params, logits_soft_cap, attn_type,
-                         **mla_args)
+                         kv_sharing_target_layer_name, **mla_args)

         unsupported_features = [
             alibi_slopes, sliding_window, blocksparse_params, logits_soft_cap
(remaining unchanged context collapsed)
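For reference, the trailing context lines above are the start of a fail-fast guard. Below is a sketch of that common pattern under the assumption that it raises NotImplementedError for any non-None unsupported option; the helper name check_unsupported_mla_features is hypothetical, and the collapsed lines of the actual file may differ.

from typing import Any, Optional


def check_unsupported_mla_features(
        alibi_slopes: Optional[list[float]],
        sliding_window: Optional[int],
        blocksparse_params: Optional[dict[str, Any]],
        logits_soft_cap: Optional[float]) -> None:
    # Hypothetical helper: collect the options this backend cannot honor
    # and fail fast at construction time instead of silently ignoring them.
    unsupported_features = [
        alibi_slopes, sliding_window, blocksparse_params, logits_soft_cap
    ]
    if any(unsupported_features):
        raise NotImplementedError(
            "This MLA backend does not support one of the following: "
            "alibi_slopes, sliding_window, blocksparse_params, "
            "logits_soft_cap")


check_unsupported_mla_features(None, None, None, None)   # all defaults: no error
# check_unsupported_mla_features([1.0], None, None, None)  # would raise NotImplementedError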