
llama_cpp/lib/libllama.so: undefined symbol: llama_kv_cache_view_init #2026

Open
@opsec-ai

Description


Prerequisites

Just built with Python 3.12 in a fresh .venv.

Please answer the following questions for yourself before submitting an issue.

  • [x] I am running the latest code. Development is very rapid so there are no tagged versions as of now.
  • [x] I carefully followed the README.md.
  • [x] I searched using keywords relevant to my issue to make sure that I am creating a new issue that is not already open (or closed).
  • [x] I reviewed the Discussions, and have a new bug or useful enhancement to share.

Expected Behavior

from llama_cpp import Llama
(success!)

Current Behavior

from llama_cpp import Llama
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/k/Downloads/src/chatterbox/.venv/lib64/python3.12/site-packages/llama_cpp/__init__.py", line 1, in <module>
    from .llama_cpp import *
  File "/home/k/Downloads/src/chatterbox/.venv/lib64/python3.12/site-packages/llama_cpp/llama_cpp.py", line 1824, in <module>
    @ctypes_function(
     ^^^^^^^^^^^^^^^^
  File "/home/k/Downloads/src/chatterbox/.venv/lib64/python3.12/site-packages/llama_cpp/_ctypes_extensions.py", line 113, in decorator
    func = getattr(lib, name)
           ^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 392, in __getattr__
    func = self.__getitem__(name)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/lib64/python3.12/ctypes/__init__.py", line 397, in __getitem__
    func = self._FuncPtr((name_or_ordinal, self))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: /home/k/Downloads/src/chatterbox/.venv/lib64/python3.12/site-packages/llama_cpp/lib/libllama.so: undefined symbol: llama_kv_cache_view_init
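One quick way to confirm the missing export (path taken from the traceback above; adjust for your .venv) is to list the library's dynamic symbols. If grep prints nothing, the rebuilt library genuinely does not export the function the bindings try to resolve at import time:

nm -D /home/k/Downloads/src/chatterbox/.venv/lib64/python3.12/site-packages/llama_cpp/lib/libllama.so | grep llama_kv_cache_view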

Environment and Context

$ sysinfo
CPU: quad core Intel Core i7-2860QM (-MT MCP-) speed/min/max: 912/800/3600 MHz
Kernel: 6.14.8-300.fc42.x86_64 x86_64 Up: 1d 12h 46m
Mem: 5.05/31.29 GiB (16.1%) Storage: 1.86 TiB (30.5% used) Procs: 376
Shell: Bash inxi: 3.3.38
Graphics:
Device-1: NVIDIA GM204GLM [Quadro M3000M] driver: nvidia v: 570.153.02
Display: x11 server: X.Org v: 21.1.16 with: Xwayland v: 24.1.6 driver: X:
loaded: nvidia gpu: nvidia,nvidia-nvswitch resolution: 1: 1920x1080~60Hz
2: 1920x1080~60Hz 3: 1366x768~60Hz
API: OpenGL v: 4.6.0 vendor: nvidia v: 570.153.02
renderer: Quadro M3000M/PCIe/SSE2
API: EGL Message: EGL data requires eglinfo. Check --recommends.
Info: Tools: api: glxinfo de: kscreen-doctor
gpu: nvidia-settings,nvidia-smi wl: kanshi,wlr-randr x11: xdriinfo,
xdpyinfo, xprop, xrandr

  • Operating System: Fedora 42

$ uname -a

Linux k 6.14.8-300.fc42.x86_64 #1 SMP PREEMPT_DYNAMIC Thu May 22 19:26:02 UTC 2025 x86_64 GNU/Linux

  • SDK versions:
$ python3 --version
Python 3.12.10

$ make --version
GNU Make 4.4.1
$ g++ --version
g++-13 (GCC) 13.3.1 20240611 (Red Hat 13.3.1-2)

Failure Information (for bugs)

Please help provide information about the failure if this is a bug. If it is not a bug, please remove the rest of this template.

Steps to Reproduce

Please provide detailed steps for reproducing the issue. We are not sitting in front of your screen, so the more detail the better.

git pull
git submodule update --remote vendor/llama.cpp
CC=gcc-13 CXX=g++-13 FORCE_CMAKE=1 CMAKE_BUILD_PARALLEL_LEVEL=7 \
  CMAKE_ARGS="-DGGML_CUDA=on -DCMAKE_CUDA_FLAGS_RELEASE=-Wno-deprecated-gpu-targets -DLLAVA_BUILD=OFF" \
  pip install .[server] --upgrade --force-reinstall --no-cache-dir
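A likely explanation, offered as a hypothesis rather than a confirmed diagnosis: git submodule update --remote moves vendor/llama.cpp to upstream HEAD instead of the commit pinned by llama-cpp-python, and recent upstream llama.cpp appears to have removed the llama_kv_cache_view_* API, so the rebuilt libllama.so no longer exports a symbol the Python bindings still declare. Checking out the pinned revision instead, then rebuilding, should realign the two sides:

git submodule update --init --recursive   # check out the pinned llama.cpp commit, note: no --remote

then rerun the pip install command above.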

Note: Many issues seem to be regarding functional or performance issues / differences with llama.cpp. In these cases we need to confirm that you're comparing against the version of llama.cpp that was built with your python package, and which parameters you're passing to the context.

Try the following:

  1. git clone https://github.com/abetlen/llama-cpp-python
  2. cd llama-cpp-python
  3. rm -rf _skbuild/ # delete any old builds
  4. python -m pip install .
  5. cd ./vendor/llama.cpp
  6. Follow llama.cpp's instructions to cmake llama.cpp
  7. Run llama.cpp's ./main with the same arguments you previously passed to llama-cpp-python and see if you can reproduce the issue. If you can, log an issue with llama.cpp (a symbol-comparison sketch follows this list).

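If step 6 leaves you with a locally built libllama.so, comparing its exports against the copy shipped inside the wheel shows exactly which llama_* functions drifted between the two trees. A sketch, assuming the paths from this report and a default cmake build directory under vendor/llama.cpp/build:

nm -D --defined-only .venv/lib64/python3.12/site-packages/llama_cpp/lib/libllama.so | awk '{print $NF}' | grep '^llama_' | sort > wheel.syms
nm -D --defined-only vendor/llama.cpp/build/bin/libllama.so | awk '{print $NF}' | grep '^llama_' | sort > fresh.syms
diff wheel.syms fresh.syms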
llama-cli -ngl 20 -m deepseek-r1-0528-qwen3-8b-q2_k.gguf -i
Hi there

My name is deepseek-r1. How can I help you?
