
vulkan: fix rms_norm+mul fusion #14545


Merged: 1 commit merged into ggml-org:master on Jul 6, 2025

Conversation

jeffbolznv (Collaborator)

The fused operation was grabbing the epsilon value from the wrong place.

Add an env var to disable fusion.
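An env-var kill switch of this kind is typically implemented by checking the process environment before fusing. A minimal sketch, assuming a hypothetical variable name `GGML_VK_DISABLE_FUSION` (the actual name added by the PR is not shown on this page):

```cpp
#include <cstdlib>

// Hedged sketch: gate fusion behind an environment variable so it can be
// turned off at runtime for debugging. GGML_VK_DISABLE_FUSION is a
// hypothetical name used here for illustration only.
static bool fusion_enabled() {
    const char * v = std::getenv("GGML_VK_DISABLE_FUSION");
    // Fusion stays on unless the variable is set to a non-zero value.
    return v == nullptr || v[0] == '\0' || v[0] == '0';
}
```

Making the switch read-only at startup (cached in a static) is a common refinement, so behavior cannot change mid-run.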

Add some missing checks for supported shapes/types.

Handle fused rms_norm+mul in check_results.

Fixes #14540.
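The bug class and the check_results-style verification can be illustrated with a small self-contained sketch (this is not the actual ggml/Vulkan code): a fused rms_norm+mul kernel must read epsilon from the rms_norm node's op params, since the mul node carries no epsilon, and the fused result should match the unfused reference.

```cpp
#include <cassert>
#include <cmath>
#include <cstring>
#include <vector>

// Minimal stand-in for a graph node carrying per-op parameters.
// In this sketch, an rms_norm node stores eps in op_params; a mul node
// stores nothing there, which is why reading eps from it is wrong.
struct Node {
    float op_params[4];
};

static std::vector<float> rms_norm(const std::vector<float> & x, float eps) {
    double ss = 0.0;
    for (float v : x) ss += (double) v * v;
    const float scale = 1.0f / std::sqrt((float)(ss / x.size()) + eps);
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) y[i] = x[i] * scale;
    return y;
}

// Unfused reference: rms_norm followed by an elementwise mul.
static std::vector<float> reference(const Node & norm, const std::vector<float> & x,
                                    const std::vector<float> & w) {
    float eps;
    std::memcpy(&eps, norm.op_params, sizeof(eps));
    std::vector<float> y = rms_norm(x, eps);
    for (size_t i = 0; i < y.size(); ++i) y[i] *= w[i];
    return y;
}

// Fused version: epsilon is taken from the rms_norm node, the correct
// source; the bug fixed by the PR was reading it from the wrong place.
static std::vector<float> fused(const Node & norm, const std::vector<float> & x,
                                const std::vector<float> & w) {
    float eps;
    std::memcpy(&eps, norm.op_params, sizeof(eps));
    double ss = 0.0;
    for (float v : x) ss += (double) v * v;
    const float scale = 1.0f / std::sqrt((float)(ss / x.size()) + eps);
    std::vector<float> y(x.size());
    for (size_t i = 0; i < x.size(); ++i) y[i] = x[i] * scale * w[i];
    return y;
}
```

A check_results-style test then compares the two paths elementwise within a small tolerance, which is exactly the kind of comparison that catches a mismatched epsilon.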

@jeffbolznv jeffbolznv requested a review from 0cc4m July 5, 2025 22:53
@github-actions bot added labels: testing (Everything test related), Vulkan (Issues specific to the Vulkan backend), ggml (changes relating to the ggml tensor library for machine learning) on Jul 5, 2025
MaggotHATE (Contributor)

These changes fix both Mistral 7b at Q8 and the spaces issue with Mistral 3.2 at Q3_K_L. Thank you!

0cc4m (Collaborator) commented Jul 6, 2025

> These changes fix both Mistral 7b at Q8 and the spaces issue with Mistral 3.2 at Q3_K_L. Thank you!

Does it also fix the other model issues you mentioned (Mamba something)?

0cc4m (Collaborator) left a review:

LGTM, thank you for the fix.

@0cc4m 0cc4m merged commit e592be1 into ggml-org:master Jul 6, 2025
48 checks passed
MaggotHATE (Contributor)

> Does it also fix the other model issues you mentioned (Mamba something)?

Unfortunately, I've already deleted that model, since it wasn't very stable in general (it doesn't stop properly after short responses, even at very low temperature), regardless of backend. I'll have to find a better Mamba-based model to test this properly.

gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jul 7, 2025
* origin/master:
CUDA: add bf16 and i32 to getrows (ggml-org#14529)
vulkan: increase LOAD_VEC_A to 8 (IQ1/IQ2) or 4 (IQ3) (ggml-org#14485)
vulkan: fix rms_norm+mul fusion (ggml-org#14545)
vulkan: Handle updated FA dim2/3 definition (ggml-org#14518)
server : fix assistant prefilling when content is an array (ggml-org#14360)
opencl: add GELU_ERF (ggml-org#14476)
eval-callback : check for empty input (ggml-org#14539)
test-backend-ops: add support for specifying output format (ggml-org#14368)
metal : disable fast math in all quantize kernels (ggml-org#14528)
batch : add optional for sequential equal split (ggml-org#14511)
graph : prepare for 4D mask (ggml-org#14515)
batch : add n_used count (ggml-org#14512)
CANN: Replace aclrtMemsetSync with aclnnInplaceZero operator (ggml-org#14002)
ggml : implement GEGLU_ERF and GEGLU_QUICK ops (ggml-org#14445)
qnixsynapse pushed a commit to menloresearch/llama.cpp that referenced this pull request Jul 10, 2025
Labels
ggml (changes relating to the ggml tensor library for machine learning), testing (Everything test related), Vulkan (Issues specific to the Vulkan backend)
Projects
None yet
Development

Successfully merging this pull request may close these issues.

Eval bug: Incoherence in Mistral 7B Q8_0 on Vulkan backend
3 participants