
Conversation

@JohannesGaessler
Collaborator

Fixes #18090 .

It seems that (unbeknownst to me) the code was trying to take a memory lock even when memory mapping is disabled, which results in a crash. This PR force-disables mlock for llama_params_fit.
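
For illustration only, a minimal sketch of the idea (the `model_params` struct, its fields, and the `params_for_fit` helper below are hypothetical stand-ins, not the actual llama.cpp API): the fitting pass works on a copy of the parameters with the mlock flag forced off, so a dry run with memory mapping disabled cannot reach the mlock path and its assert.

```cpp
#include <cstdio>

// Hypothetical stand-in for the loader parameters.
struct model_params {
    bool use_mmap;   // map the model file into memory
    bool use_mlock;  // lock mapped pages into RAM
};

// Hypothetical fitting helper: operate on a copy of the parameters with
// mlock force-disabled, so the fitting/dry-run pass never attempts to lock
// pages when no mapping exists.
static model_params params_for_fit(model_params p) {
    p.use_mlock = false;  // force-disable mlock for the fitting pass only
    return p;
}

int main() {
    model_params user = { /*use_mmap=*/false, /*use_mlock=*/true };
    model_params fit  = params_for_fit(user);
    std::printf("fit pass: use_mmap=%d use_mlock=%d\n", fit.use_mmap, fit.use_mlock);
    // The real model load afterwards can still honor the user's original settings.
    return 0;
}
```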

@JohannesGaessler JohannesGaessler merged commit d0794e8 into ggml-org:master Dec 16, 2025
67 of 71 checks passed


Development

Successfully merging this pull request may close these issues.

Misc. bug: Regression: GGML_ASSERT(addr) failed in llama-mmap.cpp on RTX 5090 (Blackwell) - works in b7376, fails in b7410+
