We followed the guideline to set up IPEX-LLM[CPP] and Intel oneAPI, but whenever we load the Phi-3 model onto the Intel GPU, a "SYCL error" occurs. We also used main.exe to load the model manually and hit the same error.
I attached the ollama and llama.cpp log files: llama.cpp_log.txt ollama_log.txt
I'm not sure whether there is an environment issue with my setup.
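Before digging into the model file itself, it may be worth confirming that the oneAPI runtime can actually see the Intel GPU. Here is a minimal sketch, assuming a working oneAPI install (compile with something like `icpx -fsycl list_devices.cpp`; the file name is just an example):

```
// Minimal SYCL device enumeration: if the Intel GPU does not show up
// here, the "SYCL error" is likely an environment/driver issue rather
// than a problem with the model file.
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    for (const auto &dev : sycl::device::get_devices()) {
        std::cout << dev.get_info<sycl::info::device::name>()
                  << " [" << (dev.is_gpu() ? "gpu" : "other") << "]\n";
    }
    return 0;
}
```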
Hi @soulyet, Q4_K has a known issue on your current machine, which we may fix later. For now, I think you can try Q4_0 first, which should work.
Thanks for the reply.
I am now trying a Q4_0 model downloaded from https://huggingface.co/SanctumAI/Phi-3-mini-4k-instruct-GGUF/tree/main, but I hit another error:
GGML_ASSERT: C:\Users\Administrator\actions-runner\cpp-release_work\llm.cpp\llm.cpp\ollama-internal\llm\llama.cpp\ggml-backend.c:100: base != NULL && "backend buffer base cannot be NULL"
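For context, that assertion fires when a backend buffer reports a NULL base pointer, which usually means the device allocation itself failed. The guard in llama.cpp's ggml-backend.c looks roughly like this (paraphrased from memory; the exact code varies by version):

```
// Paraphrase of the check in ggml-backend.c: the buffer's get_base
// callback must return a valid pointer; a NULL base typically means
// the backend (here, the SYCL device) failed to allocate memory.
void * ggml_backend_buffer_get_base(ggml_backend_buffer_t buffer) {
    void * base = buffer->iface.get_base(buffer);
    GGML_ASSERT(base != NULL && "backend buffer base cannot be NULL");
    return base;
}
```

So even with the Q4_0 file, the SYCL backend is apparently failing to hand back usable GPU memory, which points back at the environment rather than the quantization format.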