
ggml : fix n_threads_cur initialization with one thread #9538

Merged 2 commits into master on Sep 18, 2024

Conversation

@slaren (Collaborator) commented on Sep 18, 2024:

Fixes #9535
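For context, a reading of the PR title (an inference, not a quote from the diff): ggml's threadpool keeps an atomic n_threads_cur counter that the compute barriers consult, and the single-thread path never re-initialized it, so it could hold a stale value from a previous multi-threaded run. Below is a minimal, self-contained sketch of that pattern and of the kind of one-line fix the title describes; threadpool_t, graph_compute, and compute_thread are hypothetical stand-ins, not the real ggml/src/ggml.c code. Compile with cc -fopenmp.

    // sketch.c - illustrates a stale thread-count counter (hypothetical,
    // modeled on the PR title; not the actual ggml/src/ggml.c diff)
    #include <omp.h>
    #include <stdatomic.h>
    #include <stdio.h>

    typedef struct {
        atomic_int n_threads_cur; // threads participating in the current run
    } threadpool_t;

    static void compute_thread(threadpool_t * tp, int ith) {
        // in ggml, the worker's barrier logic would consult this counter
        printf("worker %d sees n_threads_cur = %d\n", ith,
               atomic_load(&tp->n_threads_cur));
    }

    static void graph_compute(threadpool_t * tp, int n_threads) {
        if (n_threads > 1) {
            #pragma omp parallel num_threads(n_threads)
            {
                #pragma omp single
                atomic_store(&tp->n_threads_cur, omp_get_num_threads());
                compute_thread(tp, omp_get_thread_num());
            }
        } else {
            // the fix: initialize the counter on the one-thread path too;
            // without this store it keeps the value from the previous run
            atomic_store(&tp->n_threads_cur, 1);
            compute_thread(tp, 0);
        }
    }

    int main(void) {
        threadpool_t tp = { .n_threads_cur = 0 };
        graph_compute(&tp, 4); // multi-threaded run sets the counter to 4
        graph_compute(&tp, 1); // without the fix, the lone worker still sees 4
        return 0;
    }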

The github-actions bot added the ggml label (changes relating to the ggml tensor library for machine learning) on Sep 18, 2024.
@ggerganov (Owner) left a comment:

It fixes the issue on my end

Review thread on ggml/src/ggml.c (outdated, resolved)
@max-krasnyansky (Collaborator) left a comment:

Yep. This fixes Metal with OMP.
Sorry for missing that case.
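For anyone verifying on a Metal build, a hedged repro sketch for the linked issue, with a placeholder model path; -m, -ngl, -t, and -p are llama-cli's model, GPU-layer, thread-count, and prompt flags.

    # full GPU offload with a single CPU thread; before this fix the
    # generated text could come out incoherent (issue #9535)
    ./llama-cli -m model.gguf -ngl 99 -t 1 -p "Hello"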

@max-krasnyansky merged commit 64c6af3 into master on Sep 18, 2024
53 checks passed
dsx1986 pushed a commit to dsx1986/llama.cpp that referenced this pull request on Oct 29, 2024:
* ggml : fix n_threads_cur initialization with one thread

* Update ggml/src/ggml.c

---------

Co-authored-by: Max Krasnyansky <quic_maxk@quicinc.com>
Labels: ggml (changes relating to the ggml tensor library for machine learning)
Projects: None yet
Development

Successfully merging this pull request may close these issues.

Bug: llama-cli generates incoherent output with full gpu offload
3 participants