Replies: 1 comment
Any update on this? I notice that the Windows version offers CUDA 12 and 13 after installation, but Linux doesn't offer CUDA at all on my end. I'm using Fedora 43 and tried the AppImage and Flatpak versions. Both have the same issue.
Hey guys,
I know there is some degree of automatic integration with llama.cpp. When I set it up and ran a model via the built-in llama.cpp, it ran on the CPU instead of the Nvidia GPU. I am using an RTX 5090. What settings need to be configured for Open WebUI + llama.cpp to pick up and use the GPU instead?
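Not an answer for the built-in integration specifically, but for reference, when running llama.cpp standalone the two usual reasons it falls back to CPU are a build compiled without CUDA and a missing `-ngl` (GPU layer offload) flag. A rough sketch of checking and fixing both (assumes the NVIDIA driver and CUDA toolkit are already installed; `model.gguf` is a placeholder path):

```shell
# 1. Confirm the driver sees the GPU at all
nvidia-smi

# 2. Build llama.cpp with CUDA enabled (GGML_CUDA is the current CMake flag;
#    older docs may show LLAMA_CUBLAS)
cmake -B build -DGGML_CUDA=ON
cmake --build build --config Release

# 3. Offload layers to the GPU with -ngl / --n-gpu-layers;
#    a large value like 99 offloads all layers that fit
./build/bin/llama-server -m model.gguf -ngl 99
```

Without `-ngl`, llama.cpp defaults to 0 offloaded layers and runs on CPU even in a CUDA build, so whatever settings panel the built-in integration exposes, the GPU-layers option is the one worth checking first.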