Tracker for ggml-org/llama.cpp#5138; also covers ROCm.
- Vulkan: feat(vulkan): add vulkan support to the llama.cpp backend #2648 (upstream: Vulkan Implementation ggml-org/llama.cpp#2059)
- Kompute: (upstream: Nomic Vulkan backend ggml-org/llama.cpp#4456)
- SYCL: feat(sycl): Add support for Intel GPUs with sycl (#1647) #1660 (upstream: Feature: Integrate with unified SYCL backend for Intel GPUs ggml-org/llama.cpp#2690)
- ROCm: Build docker container for ROCm #1595 (feat(llama.cpp): enable ROCm/HIPBLAS support #1100)