Conversation

@JohannesGaessler (Collaborator)

Currently, when no GPU is available at runtime, the warning message claims that llama.cpp was compiled without GPU support. However, the same warning can also be printed under other circumstances, for example when CUDA_VISIBLE_DEVICES=-1 is set, in which case the message is misleading. This PR adjusts the warnings to reflect the actual logic of the code.
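For illustration, here is a minimal C++ sketch of the distinction the PR draws between the two failure modes: GPU support not compiled in at all versus support compiled in but no device usable at runtime. The names `warn_if_no_gpu` and `runtime_device_count` are hypothetical stand-ins for this sketch, not the actual llama.cpp code.

```cpp
#include <cstdio>

// Hypothetical stand-in for a runtime device query; a real CUDA build
// would wrap cudaGetDeviceCount() here. Returns 0 when no device is
// visible, e.g. after CUDA_VISIBLE_DEVICES=-1.
static int runtime_device_count() {
    return 0; // stub for illustration
}

// Sketch of the adjusted logic: "no GPU compiled in" and "GPU compiled
// in but no device usable" get separate, accurate warnings.
static void warn_if_no_gpu(bool compiled_with_gpu) {
    if (!compiled_with_gpu) {
        fprintf(stderr, "warning: llama.cpp was compiled without GPU support\n");
    } else if (runtime_device_count() == 0) {
        // Before this PR, this case reused the "compiled without GPU
        // support" message even though support *was* compiled in.
        fprintf(stderr, "warning: no usable GPU found at runtime, falling back to CPU\n");
    }
}

int main() {
    warn_if_no_gpu(/*compiled_with_gpu=*/true);
    return 0;
}
```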

JohannesGaessler merged commit 8907193 into ggml-org:master on Nov 28, 2024
50 checks passed
arthw pushed a commit to arthw/llama.cpp that referenced this pull request on Dec 20, 2024