
Compilation with CLBlast in onboard graphics #2676

Closed
SilvaRaulEnriqueCJM opened this issue Aug 19, 2023 · 7 comments

@SilvaRaulEnriqueCJM

In order to compile with CLBlast:
make LLAMA_CLBLAST=1

it fails with this error:

g++ --shared -I. -I./examples -O3 -std=c++11 -fPIC -DNDEBUG -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -Wno-multichar -pthread -march=native -mtune=native -DGGML_USE_K_QUANTS -DGGML_USE_CLBLAST -I/usr/local/include -I/home/raul/Desktop/Librerias/OpenCL-SDK/install/include examples/embd-input/embd-input-lib.cpp ggml.o llama.o common.o k_quants.o ggml-opencl.o ggml-alloc.o -o libembdinput.so -L/usr/local/lib -lclblast -L/home/raul/Desktop/Librerias/OpenCL-SDK/install/lib -lOpenCL
/usr/bin/ld: /usr/local/lib/libclblast.a(xaxpy.cpp.o): warning: relocation against _ZN7clblast8database11XaxpyDoubleE' in read-only section .text.startup'
/usr/bin/ld: /usr/local/lib/libclblast.a(xdot.cpp.o): relocation R_X86_64_PC32 against symbol `ZTSZN7clblast6BufferISt7complexIdEEC4ERKNS_7ContextENS_12BufferAccessEmEUlPP7_cl_memE' can not be used when making a shared object; recompile with -fPIC
/usr/bin/ld: final link failed: bad value
collect2: error: ld returned 1 exit status
make: *** [Makefile:382: libembdinput.so] Error 1

What is the problem?

@SilvaRaulEnriqueCJM SilvaRaulEnriqueCJM changed the title [User] Insert summary of your issue or enhancement.. Compilation with CLBlast in AMD onboard graphics Aug 19, 2023
@MichaelDays

Silly suggestion: try running "sudo ldconfig"

The error suggests that the "libclblast.a" library file being picked up by g++ was not originally built to be linked into a shared object. The linker's "recompile with -fPIC" hint points to exactly this.

There may already be a version of the ".a" file with a ".so" suffix on disk that is not being picked up by the ld tool's cache. ldconfig will rebuild that cache, and may allow g++ to link against that file instead.
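A quick way to act on this suggestion and then inspect what the linker cache actually knows about CLBlast (a sketch; output depends on your install prefix):

```shell
# Rebuild the dynamic linker cache so a freshly installed libclblast.so
# (if one exists) becomes visible to the toolchain.
sudo ldconfig

# List what the cache now knows about CLBlast; an empty result means
# only the static libclblast.a is installed.
ldconfig -p | grep -i clblast
```

If the second command prints nothing, only the static archive exists and the rebuild-with-`-fPIC` route below is the fix.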

@SlyEcho
Collaborator

SlyEcho commented Aug 21, 2023

You need to compile CLBlast with -fPIC

Add it to CMAKE_C_FLAGS and CMAKE_CXX_FLAGS (in the CMakeCache.txt file in CLBlast's build directory). Recompile it and reinstall it.
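A sketch of that rebuild, assuming CLBlast was configured with CMake in a `build/` directory; `CMAKE_POSITION_INDEPENDENT_CODE=ON` is the CMake-level equivalent of appending `-fPIC` to both flag variables by hand:

```shell
cd CLBlast/build
# Regenerate the build with position-independent code enabled
# (equivalent to adding -fPIC to CMAKE_C_FLAGS and CMAKE_CXX_FLAGS).
cmake -DCMAKE_POSITION_INDEPENDENT_CODE=ON ..
make -j"$(nproc)"
sudo make install
sudo ldconfig   # refresh the linker cache after reinstalling
```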

@SilvaRaulEnriqueCJM
Author

Now CLBlast compiles on AMD with CUDA and runs fast. Thank you.

But when I compile on Intel (without a graphics card), there are no errors at compile time, but at runtime main says:

main: build = 1015 (226255b)
main: seed = 1692672569
ggml_opencl: clGetPlatformIDs(NPLAT, platform_ids, &n_platforms) error -1001 at ggml-opencl.cpp:965
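For context, error code -1001 is `CL_PLATFORM_NOT_FOUND_KHR` from the OpenCL ICD loader: no OpenCL platform (driver/runtime) is installed at all on that machine. A hedged way to confirm, assuming the `clinfo` utility is available:

```shell
# After installing an OpenCL runtime/ICD for the hardware, verify that
# at least one platform is visible to the ICD loader.
clinfo | grep -i 'number of platforms'
# ggml-opencl needs this count to be >= 1; a count of 0 reproduces
# the error -1001 from clGetPlatformIDs.
```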

@SilvaRaulEnriqueCJM SilvaRaulEnriqueCJM changed the title Compilation with CLBlast in AMD onboard graphics Compilation with CLBlast in onboard graphics Aug 22, 2023
@SilvaRaulEnriqueCJM
Author

And on another machine, an Intel Core i3 (without a graphics card), at runtime main says:

main: build = 1018 (8e4364f)
main: seed = 1692703705
ggml_opencl: selecting platform: 'Portable Computing Language'
ggml_opencl: selecting device: 'pthread-haswell-Intel(R) Core(TM) i3-4100M CPU @ 2.50GHz'
ggml_opencl: warning, not a GPU: 'pthread-haswell-Intel(R) Core(TM) i3-4100M CPU @ 2.50GHz'.
ggml_opencl: device FP16 support: false
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /media/raul/f3a54992-da3d-4e46-93f6-e84c1180f25d/models/TheBloke/llama-2-7b-chat.ggmlv3.q3_K_L.bin

llama_load_model_from_file: failed to load model
llama_init_from_gpt_params: error: failed to load model '/media/raul/f3a54992-da3d-4e46-93f6-e84c1180f25d/models/TheBloke/llama-2-7b-chat.ggmlv3.q3_K_L.bin'
main: error: unable to load model
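The reported magic number is informative on its own: `0x67676a74` is the ASCII tag of the old pre-GGUF container format, so the loader is rejecting an old-format file rather than a corrupt one. A quick decode (a sketch using `xxd`):

```shell
# Turn the hex magic from the error message back into ASCII.
printf '67676a74' | xxd -r -p; echo
# prints: ggjt  (the old GGML/GGJT file magic, not GGUF)
```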

@SilvaRaulEnriqueCJM
Author

The parameters are:
-m /media/raul/f3a54992-da3d-4e46-93f6-e84c1180f25d/models/TheBloke/llama-2-7b-chat.ggmlv3.q3_K_L.bin -ngl 43 -i --prompt "Sabiendo que hay un celular azul sobre la mesa, ¿cuál letra corresponde a la respuesta correcta?: a) hay un celular azul sobre la mesa, b) hay un celular marrón sobre la mesa, c) hay un celular negro sobre la mesa, d) hay un celular gris sobre la mesa, e) hay un celular verde sobre la mesea. Respuesta: La letra que corresponde a la respuesta correcta es "

@SlyEcho
Collaborator

SlyEcho commented Aug 22, 2023

Model format has changed, you need to convert files or download new files: #2398
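A sketch of the conversion, assuming a llama.cpp checkout from that period; the script name and flags are an assumption (they have varied across versions, e.g. `convert-llama-ggml-to-gguf.py`), so check the repository root and the script's `--help` output:

```shell
# Convert an old .ggmlv3 model file to the new GGUF format.
# Script name and flags shown are an assumption; they have varied
# across llama.cpp versions.
python3 convert-llama-ggml-to-gguf.py \
    --input  llama-2-7b-chat.ggmlv3.q3_K_L.bin \
    --output llama-2-7b-chat.q3_K_L.gguf
```

Alternatively, download a model already published in GGUF format.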

@github-actions github-actions bot added the stale label Mar 25, 2024
Contributor

github-actions bot commented Apr 9, 2024

This issue was closed because it has been inactive for 14 days since being marked as stale.

@github-actions github-actions bot closed this as completed Apr 9, 2024