
llama.cpp

Local inference engine

Acknowledgements: ggml-org/llama.cpp

Apple Silicon

  • set BUILD_SHARED_LIBS to FALSE
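A minimal two-step build sketch with this option applied (assuming the stock Xcode command-line tools; only BUILD_SHARED_LIBS comes from the note above):

# configure with shared libraries disabled, then build Release
cmake -S . -B build -DBUILD_SHARED_LIBS=FALSE
cmake --build build --config Release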

Intel

  • set GGML_CPU to FALSE
  • set CMAKE_OSX_ARCHITECTURES to x86_64
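The same two-step sketch with the Intel options applied verbatim (whether GGML_CPU is available depends on the llama.cpp revision being built):

# configure an x86_64 build with the options above, then build Release
cmake -S . -B build -DGGML_CPU=FALSE -DCMAKE_OSX_ARCHITECTURES=x86_64
cmake --build build --config Release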

Windows

  • set LLAMA_CURL to FALSE (cf. ggml-org/llama.cpp#9937)

cmake -S . -B build -A x64 ^
 -DLLAMA_STATIC=ON ^
 -DLLAMA_DIRECTML=ON ^
 -DCMAKE_TOOLCHAIN_FILE={...\vcpkg\scripts\buildsystems\vcpkg.cmake} ^
 -DCURL_INCLUDE_DIR={\vcpkg\installed\x64-windows-static\include} ^
 -DCURL_LIBRARY={\vcpkg\installed\x64-windows-static\lib\libcurl.lib} ^
 -DLLAMA_BUILD_SERVER=ON
cmake --build build --config Release
  • open the project with Visual Studio
  • add the curl include paths
  • add the following libraries:
    Crypt32.lib
    Secur32.lib
    Iphlpapi.lib
    libcurl.lib
    zlib.lib
    ws2_32.lib
  • build each target with the /MT runtime (static CRT)
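With CMake 3.15+ (policy CMP0091), the /MT runtime and the Windows system libraries can in principle be selected at configure time instead of per target in Visual Studio. This is a sketch under that assumption, not a command verified by this README; libcurl.lib and zlib.lib are still resolved through the vcpkg paths given above:

:: select the static CRT (/MT) for all targets and pass the system libs to the linker
cmake -S . -B build -A x64 ^
 -DCMAKE_MSVC_RUNTIME_LIBRARY=MultiThreaded ^
 -DCMAKE_EXE_LINKER_FLAGS="Crypt32.lib Secur32.lib Iphlpapi.lib ws2_32.lib"
cmake --build build --config Release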