
vulkan: force device 0 in CI #14106

Merged: 1 commit merged into ggml-org:master on Jun 10, 2025

Conversation

jeffbolznv
Collaborator

No description provided.

@github-actions github-actions bot added the devops improvements to build systems and github actions label Jun 10, 2025
@jeffbolznv jeffbolznv requested review from slaren and 0cc4m June 10, 2025 15:19
@jeffbolznv
Collaborator Author

This had the desired effect. The other CI error is a known issue.
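
For context, a minimal sketch of what pinning CI to a single Vulkan device can look like in a GitHub Actions step. The step name, the binary path, and the use of a GGML_VK_VISIBLE_DEVICES device-filter variable are assumptions for illustration, not the verified contents of this PR's diff:

```yaml
# Hypothetical CI step (not the actual change in this PR): restrict the ggml
# Vulkan backend to the first enumerated device before running backend tests.
- name: Test (Vulkan)
  env:
    GGML_VK_VISIBLE_DEVICES: "0"   # assumed device-filter variable; keeps the run on device 0
  run: |
    ./build/bin/test-backend-ops test
```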

@jeffbolznv jeffbolznv merged commit 652b70e into ggml-org:master Jun 10, 2025
42 of 43 checks passed
gabe-l-hart added a commit to gabe-l-hart/llama.cpp that referenced this pull request Jun 10, 2025
* origin/master:
llama : support GEGLU for jina-bert-v2 (ggml-org#14090)
vulkan: force device 0 in CI (ggml-org#14106)
Fixed spec timings to: accepted/tested instead of accepted/drafted (ggml-org#14104)
sync : ggml
ggml : fix weak alias win32 (whisper/0)
Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (ggml-org#14099)
rpc : nicer error messages for RPC server crash (ggml-org#14076)
sync : ggml
Add in-build ggml::ggml ALIAS library (ggml/1260)
metal : use less stack memory in FA kernel (ggml-org#14088)
kv-cache : fix shift and defrag logic (ggml-org#14081)
llama : allow building all tests on windows when not using shared libs (ggml-org#13980)
Labels
devops improvements to build systems and github actions
3 participants