Describe the bug
I don't have a dedicated GPU, and I would like to use the Intel iGPU.
I tried to follow #504 with only a docker compose, but I run into issues when building: cannot find -lOpenCL and cannot find -lclblast
To Reproduce
I already tried just running localai/localai:v2.6.0-ffmpeg-core as a standalone container, but ended up with compose + a Dockerfile for reproducibility.
# Dockerfile
FROM localai/localai:v2.6.0-ffmpeg-core
VOLUME /build/models
ENV LLAMA_CLBLAST=1
ENV DEBUG=true
ENV BUILD_TYPE=clblas
RUN apt-get update && apt-get install -y jq && \
mkdir -p /tmp/neo && cd /tmp/neo && \
for debUrl in $(curl -s https://api.github.com/repos/intel/intel-graphics-compiler/releases/latest | jq '.assets[].browser_download_url' -r | grep -E '\.dd?eb$'); do \
wget "$debUrl"; \
done; \
for debUrl in $(curl -s https://api.github.com/repos/intel/compute-runtime/releases/latest | jq '.assets[].browser_download_url' -r | grep -E '\.dd?eb$'); do \
wget "$debUrl"; \
done; \
dpkg -i *.deb && \
cd ~ && \
rm -rf /tmp/neo
RUN make build
CMD mistral-openorca
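For reference, my compose file boils down to something like the sketch below (the service name and host model path are from my setup, not from the LocalAI docs; the /dev/dri passthrough is what should expose the iGPU to the container):

```yaml
# docker-compose.yml sketch; service name and host paths are assumptions
services:
  localai:
    build: .
    ports:
      - "8080:8080"
    volumes:
      - ./models:/build/models
    devices:
      # pass the Intel iGPU render/card nodes into the container
      - /dev/dri:/dev/dri
```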
Expected behavior
As said in #504, when calling the API it should print ggml_opencl: selecting platform: 'Intel(R) OpenCL HD Graphics'
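Before expecting that line, it may be worth verifying that the device node is visible inside the container at all; a small sketch (the exact render node name, e.g. renderD128, varies per machine):

```shell
# Sanity check inside the container: is the iGPU device passed through?
if [ -d /dev/dri ]; then
  ls -l /dev/dri
else
  echo "/dev/dri not present: the iGPU is not passed through"
fi
```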
Logs
docker compose up -d --build logs:
0.290 go mod edit -replace github.com/nomic-ai/gpt4all/gpt4all-bindings/golang=/build/sources/gpt4all/gpt4all-bindings/golang
0.334 go mod edit -replace github.com/go-skynet/go-ggml-transformers.cpp=/build/sources/go-ggml-transformers
0.342 go mod edit -replace github.com/donomii/go-rwkv.cpp=/build/sources/go-rwkv
0.351 go mod edit -replace github.com/ggerganov/whisper.cpp=/build/sources/whisper.cpp
0.359 go mod edit -replace github.com/ggerganov/whisper.cpp/bindings/go=/build/sources/whisper.cpp/bindings/go
0.367 go mod edit -replace github.com/go-skynet/go-bert.cpp=/build/sources/go-bert
0.375 go mod edit -replace github.com/mudler/go-stable-diffusion=/build/sources/go-stable-diffusion
0.383 go mod edit -replace github.com/M0Rf30/go-tiny-dream=/build/sources/go-tiny-dream
0.391 go mod edit -replace github.com/mudler/go-piper=/build/sources/go-piper
0.400 go mod download
0.489 touch prepare-sources
0.490 touch prepare
0.515 go build -ldflags "-X "github.com/go-skynet/LocalAI/internal.Version=v2.6.0" -X "github.com/go-skynet/LocalAI/internal.Commit=06cd9ef98d898818766fec8aa630fe9fa676f6da"" -tags "" -o backend-assets/grpc/langchain-huggingface ./backend/go/llm/langchain/
16.59 CGO_LDFLAGS="-lOpenCL -lclblast" C_INCLUDE_PATH=/build/sources/go-ggml-transformers LIBRARY_PATH=/build/sources/go-ggml-transformers \
16.59 go build -ldflags "-X "github.com/go-skynet/LocalAI/internal.Version=v2.6.0" -X "github.com/go-skynet/LocalAI/internal.Commit=06cd9ef98d898818766fec8aa630fe9fa676f6da"" -tags "" -o backend-assets/grpc/falcon-ggml ./backend/go/llm/falcon-ggml/
42.91 # github.com/go-skynet/LocalAI/backend/go/llm/falcon-ggml
42.91 /usr/local/go/pkg/tool/linux_amd64/link: running g++ failed: exit status 1
42.91 /usr/bin/ld: cannot find -lOpenCL
42.91 /usr/bin/ld: cannot find -lclblast
42.91 collect2: error: ld returned 1 exit status
42.91
42.92 make: *** [Makefile:533: backend-assets/grpc/falcon-ggml] Error 1
------
failed to solve: process "/bin/sh -c make build" did not complete successfully: exit code: 2
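The linker errors above usually mean the linker cannot locate the OpenCL and CLBlast development libraries in the image. A quick check, assuming a Debian-based base image (the package names in the hint are my guess, not something LocalAI documents):

```shell
# Check whether the dynamic linker knows about the two libraries
# that -lOpenCL and -lclblast resolve to.
for lib in OpenCL clblast; do
  if ldconfig -p 2>/dev/null | grep -qi "lib${lib}.so"; then
    echo "found lib${lib}"
  else
    echo "missing lib${lib} (try: apt-get install -y ocl-icd-opencl-dev libclblast-dev)"
  fi
done
```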
Additional context
Do I have to change the backend? I've found ggerganov/ggml#217 saying llama.cpp does not work with clblast.
LocalAI version:
localai/localai:v2.6.0-ffmpeg-core
and localai/localai:v2.6.0
Environment, CPU architecture, OS, and Version:
Linux homelab 6.1.0-17-amd64 #1 SMP PREEMPT_DYNAMIC Debian 6.1.69-1 (2023-12-30) x86_64 GNU/Linux