Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm versions #19591
Merged
JohannesGaessler merged 1 commit into ggml-org:master on Feb 16, 2026
Conversation
Avoids issues with ROCm 6.4.4.

Closes: ggml-org#19580
Fixes: 6845f7f ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (ggml-org#19461)")
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>
IMbackK (Collaborator) approved these changes on Feb 13, 2026 and commented:
Ideally someone would also test this on 7.0, since we are not exactly sure when this change was introduced. But let's merge this with 6.4 as the cutoff to get things going.
JohannesGaessler approved these changes on Feb 16, 2026
michaelneale added a commit to michaelneale/llama.cpp that referenced this pull request on Feb 17, 2026:

* upstream/master: (88 commits)
  ci : bump komac version (ggml-org#19682)
  build : link ws2_32 as PUBLIC on Windows (ggml-org#19666)
  build : cleanup library linking logic (ggml-org#19665)
  convert : add JoyAI-LLM-Flash (ggml-org#19651)
  perplexity: add proper batching (ggml-org#19661)
  common : inline functions (ggml-org#18639)
  ggml : make `ggml_is_view` as API (ggml-org#19539)
  model: Add support for Tiny Aya Models (ggml-org#19611)
  build : rework llama_option_depr to handle LLAMA_CURL (ggml-org#19658)
  Adjust workaround for ROCWMMA_FATTN/GFX9 to only newer ROCm veresions (ggml-org#19591)
  models : deduplicate delta-net graphs for Qwen family (ggml-org#19597)
  graph : fix KQ mask, lora, cvec reuse checks (ggml-org#19644)
  ggml: aarch64: Implement SVE in Gemm q4_k 8x8 q8_k Kernel (ggml-org#19132)
  sync : ggml
  ggml : bump version to 0.9.7 (ggml/1425)
  ggml : bump version to 0.9.6 (ggml/1423)
  cuda: optimize iq2xxs/iq2xs/iq3xxs dequantization (ggml-org#19624)
  docs: update s390x build docs (ggml-org#19643)
  build : remove LLAMA_HTTPLIB option (ggml-org#19623)
  cmake : check if KleidiAI API has been fetched (ggml-org#19640)
  ...
liparetejas pushed a commit to liparetejas/llama.cpp that referenced this pull request on Feb 23, 2026:

…ggml-org#19591)

Avoids issues with ROCm 6.4.4.

Closes: ggml-org#19580
Fixes: 6845f7f ("Add a workaround for compilation with ROCWMMA_FATTN and gfx9 (ggml-org#19461)")
Signed-off-by: Mario Limonciello (AMD) <superm1@kernel.org>