Description
Expected Behavior
The current main branch compiles without problems, but when I try to run main.exe
it throws a ggml_opencl kernel compile error.
Current Behavior
Error message:
main: build = 763 (b8c8dda)
main: seed = 1688223346
ggml_opencl: selecting platform: 'NVIDIA CUDA'
ggml_opencl: selecting device: 'NVIDIA GeForce RTX 3090'
ggml_opencl: device FP16 support: false
ggml_opencl: kernel compile error:
<kernel>:2:14138: error: expected expression
<kernel>:2:14230: error: expected expression
<kernel>:2:14269: error: redefinition of 'is'
<kernel>:2:14222: note: previous definition is here
<kernel>:2:14282: error: expected expression
<kernel>:2:14353: error: use of undeclared identifier 'l0'
<kernel>:2:14419: error: use of undeclared identifier 'l0'
<kernel>:2:14612: error: use of undeclared identifier 'ql_offset'; did you mean 'qh_offset'?
<kernel>:2:14333: note: 'qh_offset' declared here
<kernel>:2:14766: error: expected expression
<kernel>:2:15451: error: use of undeclared identifier 'sum'
<kernel>:2:15456: error: expected expression
<kernel>:2:15507: error: use of undeclared identifier 'sum'
<kernel>:2:15875: error: use of undeclared identifier 'sum'
<kernel>:2:15880: error: expected expression
13 errors generated.
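The log also reports `ggml_opencl: device FP16 support: false` for the RTX 3090. One way to double-check which extensions the NVIDIA OpenCL driver actually exposes is `clinfo` (this assumes `clinfo` is installed; it is not part of this report):

```shell
REM Search the device extension list for cl_khr_fp16 (assumes clinfo is on PATH).
clinfo | findstr /i "fp16"
```

If nothing matches, the driver does not advertise `cl_khr_fp16`, which is consistent with the line above.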
Environment and Context
- Physical hardware:
  - AMD Ryzen 3700X
  - NVIDIA GeForce RTX 3090
- Operating System: Windows 11
- SDK versions:
  - CMake 3.27.0-rc3
  - MSVC 19.35.32217.1
  - CLBlast 1.5.2
  - OpenCL 2.2
Steps to Reproduce
- Install CLBlast and OpenCL via vcpkg
- Build and run llama.cpp as described in the README
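For anyone reproducing, the two steps above roughly correspond to the following (a sketch only; the vcpkg triplet and clone URL are assumptions based on the project and the x64 Windows environment above, not taken verbatim from this report):

```shell
REM Install the dependencies via vcpkg (x64-windows triplet assumed).
vcpkg install clblast:x64-windows opencl:x64-windows

REM Clone and build llama.cpp with CLBlast enabled.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
mkdir build && cd build
cmake .. -DLLAMA_CLBLAST=ON
cmake --build . --config Release
```

The resulting binaries land in `build\bin\Release\`, matching the build log below.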
CMake Log
cmake .. -DLLAMA_CLBLAST=ON -DCLBlast_dir=C:\Users\lkreu\vcpkg\packages\clblast_x64-windows
-- Building for: Visual Studio 17 2022
-- Selecting Windows SDK version 10.0.22000.0 to target Windows 10.0.22621.
-- The C compiler identification is MSVC 19.35.32217.1
-- The CXX compiler identification is MSVC 19.35.32217.1
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.35.32215/bin/Hostx64/x64/cl.exe - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: C:/Program Files/Microsoft Visual Studio/2022/Community/VC/Tools/MSVC/14.35.32215/bin/Hostx64/x64/cl.exe - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Found Git: C:/Program Files/Git/cmd/git.exe (found version "2.40.1.windows.1")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Failed
-- Looking for pthread_create in pthreads
-- Looking for pthread_create in pthreads - not found
-- Looking for pthread_create in pthread
-- Looking for pthread_create in pthread - not found
-- Found Threads: TRUE
-- CLBlast found
-- CMAKE_SYSTEM_PROCESSOR: AMD64
-- x86 detected
-- Configuring done (5.2s)
-- Generating done (0.2s)
CMake Warning:
Manually-specified variables were not used by the project:
CLBlast_dir
-- Build files have been written to: F:/Github/llama.cpp/build
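The warning above indicates that `-DCLBlast_dir=...` was ignored: CMake cache variables are case-sensitive, and `find_package(CLBlast)` in config mode looks for `CLBlast_DIR`, which should point at the directory containing `CLBlastConfig.cmake`. A hedged guess at the intended flag (the `share\clblast` subdirectory is an assumption based on vcpkg's usual package layout):

```shell
REM Case-sensitive variable name; path suffix is an assumption, not from the report.
cmake .. -DLLAMA_CLBLAST=ON -DCLBlast_DIR=C:\Users\lkreu\vcpkg\packages\clblast_x64-windows\share\clblast
```

Note that configuration succeeded anyway (`-- CLBlast found`), so the unused variable is likely unrelated to the kernel compile error.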
Build Log
cmake --build . --config Release
MSBuild version 17.5.1+f6fdcf537 for .NET Framework
ggml-opencl.cpp
ggml.vcxproj -> F:\Github\llama.cpp\build\ggml.dir\Release\ggml.lib
Building Custom Rule F:/Github/llama.cpp/CMakeLists.txt
llama.cpp
llama.vcxproj -> F:\Github\llama.cpp\build\Release\llama.lib
Building Custom Rule F:/Github/llama.cpp/examples/CMakeLists.txt
common.cpp
common.vcxproj -> F:\Github\llama.cpp\build\examples\common.dir\Release\common.lib
Building Custom Rule F:/Github/llama.cpp/examples/baby-llama/CMakeLists.txt
baby-llama.cpp
baby-llama.vcxproj -> F:\Github\llama.cpp\build\bin\Release\baby-llama.exe
Building Custom Rule F:/Github/llama.cpp/examples/benchmark/CMakeLists.txt
benchmark-matmult.cpp
benchmark.vcxproj -> F:\Github\llama.cpp\build\bin\Release\benchmark.exe
Building Custom Rule F:/Github/llama.cpp/examples/embd-input/CMakeLists.txt
embd-input-lib.cpp
F:\Github\llama.cpp\examples\embd-input\embd-input-lib.cpp(33,27): warning C4244: '=': conversion from 'time_t' to 'uint32_t', possible loss of data [F:\Github\llama.cpp\build\examples\embd-input\embdinput.vcxproj]
embdinput.vcxproj -> F:\Github\llama.cpp\build\examples\embd-input\Release\embdinput.lib
Building Custom Rule F:/Github/llama.cpp/examples/embd-input/CMakeLists.txt
embd-input-test.cpp
embd-input-test.vcxproj -> F:\Github\llama.cpp\build\bin\Release\embd-input-test.exe
Building Custom Rule F:/Github/llama.cpp/examples/embedding/CMakeLists.txt
embedding.cpp
embedding.vcxproj -> F:\Github\llama.cpp\build\bin\Release\embedding.exe
Building Custom Rule F:/Github/llama.cpp/CMakeLists.txt
ggml_static.vcxproj -> F:\Github\llama.cpp\build\Release\ggml_static.lib
Building Custom Rule F:/Github/llama.cpp/examples/main/CMakeLists.txt
main.cpp
main.vcxproj -> F:\Github\llama.cpp\build\bin\Release\main.exe
Building Custom Rule F:/Github/llama.cpp/examples/perplexity/CMakeLists.txt
perplexity.cpp
perplexity.vcxproj -> F:\Github\llama.cpp\build\bin\Release\perplexity.exe
Building Custom Rule F:/Github/llama.cpp/pocs/vdot/CMakeLists.txt
q8dot.cpp
q8dot.vcxproj -> F:\Github\llama.cpp\build\bin\Release\q8dot.exe
Building Custom Rule F:/Github/llama.cpp/examples/quantize/CMakeLists.txt
quantize.cpp
quantize.vcxproj -> F:\Github\llama.cpp\build\bin\Release\quantize.exe
Building Custom Rule F:/Github/llama.cpp/examples/quantize-stats/CMakeLists.txt
quantize-stats.cpp
quantize-stats.vcxproj -> F:\Github\llama.cpp\build\bin\Release\quantize-stats.exe
Building Custom Rule F:/Github/llama.cpp/examples/save-load-state/CMakeLists.txt
save-load-state.cpp
save-load-state.vcxproj -> F:\Github\llama.cpp\build\bin\Release\save-load-state.exe
Building Custom Rule F:/Github/llama.cpp/examples/simple/CMakeLists.txt
simple.cpp
F:\Github\llama.cpp\examples\simple\simple.cpp(126): warning C4267: 'argument': conversion from 'size_t' to 'int', possible loss of data [F:\Github\llama.cpp\build\examples\simple\simple.vcxproj]
simple.vcxproj -> F:\Github\llama.cpp\build\bin\Release\simple.exe
Building Custom Rule F:/Github/llama.cpp/tests/CMakeLists.txt
test-quantize-fns.cpp
test-quantize-fns.vcxproj -> F:\Github\llama.cpp\build\bin\Release\test-quantize-fns.exe
Building Custom Rule F:/Github/llama.cpp/tests/CMakeLists.txt
test-quantize-perf.cpp
test-quantize-perf.vcxproj -> F:\Github\llama.cpp\build\bin\Release\test-quantize-perf.exe
Building Custom Rule F:/Github/llama.cpp/tests/CMakeLists.txt
test-sampling.cpp
test-sampling.vcxproj -> F:\Github\llama.cpp\build\bin\Release\test-sampling.exe
Building Custom Rule F:/Github/llama.cpp/tests/CMakeLists.txt
test-tokenizer-0.cpp
test-tokenizer-0.vcxproj -> F:\Github\llama.cpp\build\bin\Release\test-tokenizer-0.exe
Building Custom Rule F:/Github/llama.cpp/examples/train-text-from-scratch/CMakeLists.txt
train-text-from-scratch.cpp
train-text-from-scratch.vcxproj -> F:\Github\llama.cpp\build\bin\Release\train-text-from-scratch.exe
Building Custom Rule F:/Github/llama.cpp/pocs/vdot/CMakeLists.txt
vdot.cpp
vdot.vcxproj -> F:\Github\llama.cpp\build\bin\Release\vdot.exe
Building Custom Rule F:/Github/llama.cpp/CMakeLists.txt