Load Sliding Window Attention (SWA) pattern from GGUF metadata (qwen3) #17597
Conversation
The tests pass locally.
@CISC It was initially added because, IIRC, it was a last-minute change: the ollama engine requires this KV (so that users can freely use the same GGUF on either ollama or llama.cpp). Around that time there was another model that also used this metadata, but it was never made public, and neither was the code to handle it. So honestly I don't have a strong opinion on whether we should merge this PR or not. It seems like adding it doesn't break anything anyway (because for most models, the pattern will be overwritten by the per-model code).
I was thinking between:
I chose option 2 because swa_type already exists, and we don't have to choose full attention first or last as a separate param.
Realistically, there are no models to this day that use more than one attention type, so a bool should be enough.
Yeah, the type is a bool (sliding attention or full attention). Only LLM_ARCH_LLAMA4 uses chunked attention. It seems like the trend is toward linear attention instead of SWA in the future, so I am going to keep the bool as is instead of an enum.
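For context, a minimal standalone sketch (not the llama.cpp implementation; the function and variable names here are made up for illustration) of how a per-layer bool can encode a repeating pattern such as "every n-th layer uses full attention, the rest use SWA":

#include <cstdint>
#include <cstdio>
#include <vector>

// Expand a repeating pattern into per-layer flags.
// true  -> the layer uses sliding window attention
// false -> the layer uses full attention
static std::vector<bool> make_swa_layers(uint32_t n_layer, uint32_t n_pattern) {
    std::vector<bool> swa_layers(n_layer, false);
    if (n_pattern > 1) {
        for (uint32_t il = 0; il < n_layer; ++il) {
            // every n_pattern-th layer keeps full attention, the others use SWA
            swa_layers[il] = (il % n_pattern) < (n_pattern - 1);
        }
    }
    return swa_layers;
}

int main() {
    // e.g. a 5-SWA / 1-full repeating block over 12 layers
    const auto layers = make_swa_layers(12, 6);
    for (size_t il = 0; il < layers.size(); ++il) {
        std::printf("layer %2zu: %s\n", il, layers[il] ? "swa" : "full");
    }
    return 0;
}

As long as no model mixes more than one non-full attention variant across its layers, a per-layer bool like this is sufficient and an enum buys nothing.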
src/llama-model.cpp (Outdated)
ml.get_key(LLM_KV_ATTENTION_SLIDING_WINDOW, hparams.n_swa, false);
if (hparams.n_swa > 0) {
    hparams.swa_type = LLAMA_SWA_TYPE_STANDARD;
I don't think we can assume anything about the swa_type here. Please remove this line.
This line is required; otherwise SWA won't activate. A similar structure is used below.
I don't get it. How does SWA not get activated?
swa_type should be set per-model. Please remove this line.
case LLAMA_SWA_TYPE_STANDARD:
    {
        if (p1 - p0 >= (int32_t) n_swa) {
            return true;
        }
    } break;
SWA requires specific masking to work; without it, the model will generate gibberish. The default semantic is "standard", and specific models can set it to chunked or symmetric.
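To make the point about masking concrete, here is a small self-contained sketch (not the actual llama.cpp code; the chunked semantics are an assumption modeled on Llama 4-style chunked attention) of how the mask decision differs per SWA type. p0 is the key/value position and p1 is the query position:

#include <cstdint>
#include <cstdio>

enum swa_type_t { SWA_NONE, SWA_STANDARD, SWA_CHUNKED };

// return true if the key at p0 must be masked out for the query at p1
static bool is_masked_swa(swa_type_t type, int32_t n_swa, int32_t p0, int32_t p1) {
    switch (type) {
        case SWA_NONE:
            // full attention: nothing beyond the causal mask is hidden
            break;
        case SWA_STANDARD:
            // keys that fall outside the sliding window ending at p1 are masked
            if (p1 - p0 >= n_swa) {
                return true;
            }
            break;
        case SWA_CHUNKED:
            // assumed semantics: a query only attends inside its own chunk of size n_swa
            if (p0 < (p1 / n_swa) * n_swa) {
                return true;
            }
            break;
    }
    return false;
}

int main() {
    // with n_swa = 4, query position 10 can see keys 7..10 under standard SWA
    for (int32_t p0 = 5; p0 <= 10; ++p0) {
        std::printf("p0=%d masked=%d\n", p0, is_masked_swa(SWA_STANDARD, 4, p0, 10));
    }
    return 0;
}

Without the type-specific mask, a model trained with SWA would attend to positions it never attended to during training, which is why the output degrades to gibberish.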
src/llama-model.cpp (Outdated)
} else {
    hparams.swa_type = LLAMA_SWA_TYPE_NONE;
}
Remove this branch too; it's redundant.
I can remove this because it is LLAMA_SWA_TYPE_NONE by default, but LLAMA_SWA_TYPE_NONE is also set explicitly below, so I added it just in case.
Can you also test this PR with one of the models using SWA (for example, gemma3n) to confirm that it does not break anything?
Hmm, on second thought, I don't quite like this change, as it reads everything at the global level. This affects all models and could potentially break things, because we cannot test every GGUF in the wild. So unless you can argue otherwise, I prefer closing this PR and only implementing this when a new model actually needs it.
I tested with gemma3n: https://huggingface.co/unsloth/gemma-3n-E2B-it-GGUF/blob/main/gemma-3n-E2B-it-Q4_0.gguf
This should be low-risk because we only read swa_pattern from the metadata if n_swa > 0. If the swa_pattern metadata is not present, it defaults to false (full attention).
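A minimal sketch of the guard described here, with stand-in struct and field names (the real code goes through llama.cpp's model loader and hparams, so treat everything below as illustrative):

#include <cstdint>
#include <optional>

// stand-ins for values read from GGUF metadata; std::nullopt means the key is absent
struct swa_metadata {
    std::optional<uint32_t> sliding_window;          // <arch>.attention.sliding_window
    std::optional<uint32_t> sliding_window_pattern;  // <arch>.attention.sliding_window_pattern
};

struct swa_params {
    uint32_t n_swa     = 0;      // 0 means SWA is disabled
    uint32_t n_pattern = 0;      // 0 means no per-layer pattern, i.e. full attention everywhere
    bool     use_swa   = false;
};

static swa_params load_swa_params(const swa_metadata & meta) {
    swa_params p;
    p.n_swa = meta.sliding_window.value_or(0);
    if (p.n_swa > 0) {
        // the pattern key is only consulted when a window size is actually set
        p.n_pattern = meta.sliding_window_pattern.value_or(0);
    }
    // a missing or zero pattern leaves the model on plain full attention, unchanged from before
    p.use_swa = p.n_swa > 0 && p.n_pattern > 0;
    return p;
}

Models whose GGUF carries neither key take the first early-out, which is why the change should not affect them.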
@ngxson I technically only need qwen3. I updated the PR to address your concern.
Why? Qwen3 doesn't need it. IMO you shouldn't push something just because you need it.
Are you working on a Qwen3-based model with SWA?
Yes
If that's the case, you should have mentioned it explicitly in the description. Otherwise we have no idea what this is used for. Just note that we generally don't accept these kinds of changes unless the model is public, or about to be public. We cannot verify that this actually works if we don't have access to the model.
Technically, Qwen > 1.5 should already have been tested with SWA: QwenLM/Qwen3#168. This is also included in Hugging Face Transformers.
Can you specify exactly which model? Qwen3 does not have such a setting in its config.json. Honestly, it would be way easier for us if you just linked to the exact model and its config. Also, just want to note that you also need to handle this in the conversion script for the Qwen models.
The mentioned discussion is from 2024 and it was about Qwen2. They reused the same repo and just renamed it. There is no explicit confirmation of whether Qwen3 supports SWA or not. Honestly, the motivation of this PR is unclear and there is nothing to test it against.
https://huggingface.co/docs/transformers/main/en/model_doc/qwen3#transformers.Qwen3Config.max_window_layers The goal of the PR is stated in the description; the suggestions made later are what made it confusing. I wanted to support all models. You claimed there would be a risk of breaking other models, so I reduced the scope to only the qwen3 arch. You asked me to test with gemma3n before, and I tested it. Now you are asking for a specific Qwen3 model, where there is clear Hugging Face support for SWA but the Qwen team has not released a model with it. We have been discussing non-technical viewpoints, and I find it unproductive. In the end, this PR is further away from being merged after all the work. Let's take a step back: what are the reasonable steps to support SWA in general? What do you think should be included from here?
@taylorchu I got frustrated because you only mentioned Qwen and the ...
And no, we cannot "automatically" support SWA for all models. You already discovered the reason: each model graph has to manually make a call to ... And obviously the changes you included here won't work without handling it in the GGUF conversion script. So that's why I asked for the Qwen config. Nowhere in the discussion did I mention a non-technical thing. You also can't assume that Qwen will eventually release a model supporting SWA; they have not explicitly confirmed that. If they never release one (or maybe they do, but with a different implementation), then this will effectively become dead code. So no, it is not good practice to implement features that we cannot test (in other words, a Qwen model with SWA does not exist at this moment).
First of all, thanks for reviewing anyway. I refer to the non-technical viewpoint as: "we only support a public, already-released model that actually uses the feature; support is never added preemptively just because Hugging Face Transformers code or a config field exists." That is a fine policy if you clearly state it in the README. A test model can be trained with Hugging Face on a small dataset and verified afterwards. None of what you said is a true technical blocker, even ...
The SWA pattern key was originally introduced in #14400.
This PR changes the behavior so that llama.cpp now reads the sliding window attention pattern directly from the GGUF file (e.g. attention.sliding_window_pattern metadata when present).
This brings the following benefits:
The loading precedence is:
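As a practical aside, one way to check which of these keys a given GGUF (for example, the gemma3n file tested above) actually carries is to read them with ggml's gguf C API. A rough sketch: the key names follow the usual <arch>.attention.* prefix convention, the value type is assumed to be u32, and the exact API may differ between ggml versions, so treat this as an assumption rather than a canonical tool.

#include <cstdint>
#include <cstdio>
#include <string>
#include "gguf.h"

int main(int argc, char ** argv) {
    if (argc < 3) {
        std::fprintf(stderr, "usage: %s model.gguf arch\n", argv[0]);
        return 1;
    }

    struct gguf_init_params params = { /*.no_alloc =*/ true, /*.ctx =*/ nullptr };
    struct gguf_context * ctx = gguf_init_from_file(argv[1], params);
    if (ctx == nullptr) {
        std::fprintf(stderr, "failed to open %s\n", argv[1]);
        return 1;
    }

    const std::string arch = argv[2]; // e.g. "qwen3" or "gemma3n"
    const char * suffixes[] = { ".attention.sliding_window", ".attention.sliding_window_pattern" };

    for (const char * suffix : suffixes) {
        const std::string key = arch + suffix;
        const int64_t id = gguf_find_key(ctx, key.c_str());
        if (id < 0) {
            std::printf("%s: not present\n", key.c_str());
        } else {
            // assumes the value is stored as u32
            std::printf("%s = %u\n", key.c_str(), gguf_get_val_u32(ctx, id));
        }
    }

    gguf_free(ctx);
    return 0;
}

If the pattern key is absent, the loader falls back to the behavior the model had before this change, per the guard discussed above.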