The macOS situation is a bit easier to handle because there is only the Metal backend, and we know it is available on every device. For Windows and Linux we would need separate builds for CPU AVX, CPU AVX2, CPU AVX-512, CUDA, HIP, SYCL, Vulkan, etc., which is not very user friendly, to say the least.
Prerequisites
Feature Description
On macOS/Linux, users can easily install a pre-built version of llama.cpp via
brew
It would be nice to have an equivalent on Windows, via
winget
Motivation
The pre-built binaries are already available via releases: https://github.com/ggerganov/llama.cpp/releases
It would be nice to somehow push them to https://winget.run/
However, I'm not familiar with Windows development, so I'm creating this issue for further discussion and to ask the community for help.
Possible Implementation
No response
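As a possible starting point: winget packages are published by submitting a manifest to the microsoft/winget-pkgs repository (the `wingetcreate` tool can help generate and submit one). A minimal sketch of a singleton manifest is below; the package identifier, version, URL placeholders, and hash are assumptions for illustration, not real values.

```yaml
# Hypothetical winget singleton manifest for llama.cpp.
# PackageIdentifier, PackageVersion, InstallerUrl, and
# InstallerSha256 are placeholders, not real values.
PackageIdentifier: ggerganov.llamacpp
PackageVersion: "0.0.0"
PackageName: llama.cpp
Publisher: ggerganov
License: MIT
ShortDescription: LLM inference in C/C++
Installers:
  - Architecture: x64
    InstallerType: zip
    InstallerUrl: https://github.com/ggerganov/llama.cpp/releases/download/<tag>/<asset>.zip
    InstallerSha256: <sha256-of-release-asset>
ManifestType: singleton
ManifestVersion: 1.6.0
```

Per-backend builds (AVX/AVX2/CUDA/Vulkan, ...) would likely need either separate package identifiers or a single package with an installer per variant, which is part of what needs discussing.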