Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend #14099
Conversation
This is exactly what we need! Defaulting to llvmpipe was silly (we would even warn in the logs that this is probably not what you want). As an aside, will this work with Vulkan? Auto-setting ngl for Vulkan could be kinda neat.
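As a rough illustration of the idea (a hypothetical sketch, not part of this PR; the function name, parameters, and the 80% headroom factor are all made up):

```cpp
#include <algorithm>
#include <cstdint>

// Derive a default -ngl value from the device's reported memory: offload
// as many layers as fit into ~80% of device-local memory, capped at the
// model's layer count.
static int auto_ngl(uint64_t device_local_bytes,  // device-local heap size
                    uint64_t bytes_per_layer,     // rough per-layer weight size
                    int      n_layers) {          // total model layers
    if (bytes_per_layer == 0) {
        return 0;
    }
    const uint64_t budget = device_local_bytes * 4 / 5; // headroom for KV cache etc.
    const int fit = (int)(budget / bytes_per_layer);
    return std::min(fit, n_layers);
}
```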
```cpp
// If only CPU devices are available, return without devices.
if (vk_instance.device_indices.empty()) {
    for (size_t i = 0; i < devices.size(); i++) {
        if (devices[i].getProperties().deviceType != vk::PhysicalDeviceType::eCpu) {
```
It's possible we want to consider other device types here too, like:

```cpp
if (devices[i].getProperties().deviceType != vk::PhysicalDeviceType::eCpu &&
    devices[i].getProperties().deviceType != vk::PhysicalDeviceType::eIntegratedGpu)
```

Most integrated GPUs are slower than the CPU at inference; especially if an integrated GPU has < 1 GB of VRAM, it gets very questionable. See the sketch below for one way to fold that into the check.

But this could be discussion for another PR...
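For concreteness, a hypothetical filter that also weighs memory (a sketch, not the PR's code; the 1 GiB cutoff and the helper name are invented):

```cpp
#include <algorithm>
#include <vulkan/vulkan.hpp>

// Accept discrete GPUs unconditionally; accept integrated GPUs only if
// they expose a device-local heap of at least 1 GiB; reject CPU devices.
static bool is_usable_gpu(const vk::PhysicalDevice & dev) {
    const vk::PhysicalDeviceProperties props = dev.getProperties();
    if (props.deviceType == vk::PhysicalDeviceType::eCpu) {
        return false;
    }
    if (props.deviceType == vk::PhysicalDeviceType::eIntegratedGpu) {
        const vk::PhysicalDeviceMemoryProperties mem = dev.getMemoryProperties();
        vk::DeviceSize device_local = 0;
        for (uint32_t i = 0; i < mem.memoryHeapCount; i++) {
            if (mem.memoryHeaps[i].flags & vk::MemoryHeapFlagBits::eDeviceLocal) {
                device_local = std::max(device_local, mem.memoryHeaps[i].size);
            }
        }
        return device_local >= (vk::DeviceSize(1) << 30); // >= 1 GiB
    }
    return true; // eDiscreteGpu, eVirtualGpu, eOther
}
```

One caveat: many iGPUs report a large device-local heap carved out of system RAM, so heap size alone may not separate the fast ones from the slow ones.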
Actually, looking at the various types it can be, I'd flip it:

```cpp
if (devices[i].getProperties().deviceType == vk::PhysicalDeviceType::eDiscreteGpu)
```
That is true, but there are a lot of iGPUs that run better than the CPU with Vulkan, too. It is not as straightforward to decide here; we might need a black- or whitelist.
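If it came to that, a list could key on the vendor ID and device name Vulkan already reports (purely a hypothetical sketch; the entries below are placeholders, not a vetted list):

```cpp
#include <cstring>
#include <vulkan/vulkan.hpp>

// Allow an integrated GPU only if its (vendorID, name substring) pair is
// known to perform well with the Vulkan backend.
static bool igpu_allowlisted(const vk::PhysicalDeviceProperties & props) {
    struct entry { uint32_t vendor_id; const char * name_substr; };
    static const entry allow[] = {
        { 0x106B, "Apple"  },  // placeholder entries only
        { 0x1002, "Radeon" },
    };
    for (const entry & e : allow) {
        if (props.vendorID == e.vendor_id &&
            std::strstr(props.deviceName.data(), e.name_substr) != nullptr) {
            return true;
        }
    }
    return false;
}
```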
Considering we can always override with GGML_VK_VISIBLE_DEVICES, only eDiscreteGpu would get my vote
Curious what your take is, @a-ghorbani, from the Android perspective.
> Most integrated GPUs are slower than the CPU at inference; especially if an integrated GPU has < 1 GB of VRAM, it gets very questionable.

> Considering we can always override with GGML_VK_VISIBLE_DEVICES, only eDiscreteGpu would get my vote
In the vast majority of cases an integrated GPU with Vulkan and -ngl 0 is going to perform better than the CPU in prompt processing. Also, pretty much all computer iGPUs that support Vulkan (so anything newer than Intel Skylake or the AMD GCN2 APUs) should be able to access several GBs of memory no problem. I'll admit I'm not sure about phones, though.
If you look at the chart, a lot of the newer integrated chips run very well even with the model fully offloaded. Also, Intel, AMD, and Nvidia are beginning to follow Apple by making fast iGPUs with more memory bandwidth.
There is one case where the CPU might win at prompt processing, though: when you have one of those new 16-core AMD Zen 5 CPUs with the little 2 CU iGPU.
Looks like we hit a flake
Looks like this did indeed disable our CI coverage?
IMO this needs to be fixed or reverted ASAP.
I didn't expect it to be merged this quickly; maybe I should have set it to draft. But basically you only need to set `GGML_VK_VISIBLE_DEVICES`.
OK, I've made an attempt at #14106 (though I'm not an expert on GitHub workflows).
#14099 What is this?
* origin/master:
  * llama : support GEGLU for jina-bert-v2 (ggml-org#14090)
  * vulkan: force device 0 in CI (ggml-org#14106)
  * Fixed spec timings to: accepted/tested instead of accepted/drafted (ggml-org#14104)
  * sync : ggml
  * ggml : fix weak alias win32 (whisper/0)
  * Vulkan: Don't default to CPU device (like llvmpipe), even if no other device is available, to allow fallback to CPU backend (ggml-org#14099)
  * rpc : nicer error messages for RPC server crash (ggml-org#14076)
  * sync : ggml
  * Add in-build ggml::ggml ALIAS library (ggml/1260)
  * metal : use less stack memory in FA kernel (ggml-org#14088)
  * kv-cache : fix shift and defrag logic (ggml-org#14081)
  * llama : allow building all tests on windows when not using shared libs (ggml-org#13980)
This should fix containers/ramalama#1479.

llvmpipe can still be used by setting `GGML_VK_VISIBLE_DEVICES` to override automatic device selection. This may now be required to allow the GitHub CI test-backend-ops job to run for Vulkan.
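For reference, a minimal sketch of how such an env-var override could be parsed (illustrative only; the actual handling in ggml-vulkan.cpp may differ):

```cpp
#include <cstdlib>
#include <sstream>
#include <string>
#include <vector>

// Read GGML_VK_VISIBLE_DEVICES as a comma-separated list of device
// indices; an empty result means "keep automatic selection".
static std::vector<size_t> visible_devices_override() {
    std::vector<size_t> indices;
    const char * env = std::getenv("GGML_VK_VISIBLE_DEVICES");
    if (env == nullptr) {
        return indices;
    }
    std::stringstream ss(env);
    std::string tok;
    while (std::getline(ss, tok, ',')) {  // e.g. "0" or "0,2"
        if (!tok.empty()) {
            indices.push_back(std::strtoul(tok.c_str(), nullptr, 10));
        }
    }
    return indices;
}
```

In practice that means running with, e.g., `GGML_VK_VISIBLE_DEVICES=0` in the environment to pin device 0, which is what #14106 does for CI.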