Local inference engine
Acknowledgements: ggml-org/llama.cpp
- set BUILD_SHARED_LIBS to FALSE
- set GGML_CPU to FALSE
- set CMAKE_OSX_ARCHITECTURES to x86_64
- set LLAMA_CURL to FALSE (cf. ggml-org/llama.cpp#9937)
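These cache variables can be changed in cmake-gui or passed as -D options at configure time; a minimal sketch of the command-line form, to be merged with the full configure command below:

```
cmake -S . -B build -DBUILD_SHARED_LIBS=FALSE -DGGML_CPU=FALSE -DCMAKE_OSX_ARCHITECTURES=x86_64 -DLLAMA_CURL=FALSE
```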
cmake -S . -B build -A x64 ^
  -DLLAMA_STATIC=ON ^
  -DLLAMA_DIRECTML=ON ^
  -DCMAKE_TOOLCHAIN_FILE={...\vcpkg\scripts\buildsystems\vcpkg.cmake} ^
  -DCURL_INCLUDE_DIR={\vcpkg\installed\x64-windows-static\include} ^
  -DCURL_LIBRARY={\vcpkg\installed\x64-windows-static\lib\libcurl.lib} ^
  -DLLAMA_BUILD_SERVER=ON
cmake --build build --config Release
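The CURL_INCLUDE_DIR and CURL_LIBRARY paths above point at a vcpkg x64-windows-static installation; if curl has not been installed for that triplet yet, a classic-mode vcpkg install along these lines should provide it (an assumption about the setup, not something the notes state explicitly):

```
vcpkg install curl:x64-windows-static
```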
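After the Release build completes, the server binary typically ends up under build\bin\Release; a minimal smoke test, assuming a GGUF model at models\model.gguf (the model path is hypothetical):

```
build\bin\Release\llama-server.exe -m models\model.gguf --host 127.0.0.1 --port 8080
```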
- open the project with Visual Studio
- add the curl include paths
- add the following libraries (a CMake equivalent is sketched after this list):
  - Crypt32.lib
  - Secur32.lib
  - Iphlpapi.lib
  - libcurl.lib
  - zlib.lib
  - ws2_32.lib
- build each target with /MT (the static MSVC runtime library)
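If these last steps are driven from CMake instead of the Visual Studio property pages, a rough equivalent is sketched below; the target name llama-server and the vcpkg install location are assumptions, and only the libraries and the /MT runtime from the steps above are reproduced:

```cmake
# Hypothetical CMakeLists.txt fragment mirroring the manual Visual Studio settings above.
# The target name "llama-server" and the vcpkg install location are assumptions.

# Static MSVC runtime: /MT for Release, /MTd for Debug (needs CMake >= 3.15 / policy CMP0091).
set_property(TARGET llama-server PROPERTY
    MSVC_RUNTIME_LIBRARY "MultiThreaded$<$<CONFIG:Debug>:Debug>")

# curl headers and static import libraries from vcpkg (assumed location).
target_include_directories(llama-server PRIVATE
    "C:/vcpkg/installed/x64-windows-static/include")
target_link_directories(llama-server PRIVATE
    "C:/vcpkg/installed/x64-windows-static/lib")

# Static libcurl plus the Windows system libraries it requires.
target_link_libraries(llama-server PRIVATE
    libcurl.lib
    zlib.lib
    Crypt32.lib
    Secur32.lib
    Iphlpapi.lib
    ws2_32.lib)
```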