
Misc. bug: llama-bench SEGFAULTS w/ SYCL/HIP backend, however llama-cli seems to work #10850

Closed

@lhl

Description

Name and Version

❯ build/bin/llama-cli --version
ggml_sycl_init: GGML_SYCL_FORCE_MMQ: no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 1 SYCL devices:
version: 4334 (4ddd199)
built with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.0 (2025.0.0.20241008) for x86_64-unknown-linux-gnu

Operating systems

Linux

Which llama.cpp modules do you know to be affected?

llama-bench

Problem description & steps to reproduce

I built llama.cpp with the SYCL backend with AMD HIP support, following (mostly) the build docs (a PR is coming for some fixes).
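
For reference, the build was roughly along these lines (a sketch following the SYCL docs' AMD/HIP instructions; the exact flags and the gfx1100 arch for the W7900 are reconstructed, not copied verbatim from my shell history):

# oneAPI environment for the DPC++ compiler
source /opt/intel/oneapi/setvars.sh
# SYCL build targeting AMD via HIP (gfx1100 = Radeon Pro W7900; adjust for your GPU)
cmake -B build -DGGML_SYCL=ON -DGGML_SYCL_TARGET=AMD \
    -DGGML_SYCL_DEVICE_ARCH=gfx1100 \
    -DCMAKE_C_COMPILER=icx -DCMAKE_CXX_COMPILER=icpx \
    -DCMAKE_BUILD_TYPE=Release
cmake --build build --config Release -j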

When I try to run llama-bench, it segfaults right after the call to ggml_sycl_rms_norm:

❯ GGML_SYCL_DEBUG=1 build/bin/llama-bench -m /models/gguf/llama-2-7b.Q4_0.gguf
ggml_sycl_init: GGML_SYCL_FORCE_MMQ:   no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 1 SYCL devices:
| model                          |       size |     params | backend    | ngl |          test |                  t/s |
| ------------------------------ | ---------: | ---------: | ---------- | --: | ------------: | -------------------: |
[SYCL] call ggml_backend_sycl_print_sycl_devices
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_host_buffer_type
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_host_buffer_type
call ggml_sycl_rms_norm
call ggml_sycl_rms_norm done
zsh: segmentation fault (core dumped)  GGML_SYCL_DEBUG=1 build/bin/llama-bench -m /models/gguf/llama-2-7b.Q4_0.gguf
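
If it helps narrow this down, the same invocation under gdb should show the faulting frame (a plain gdb sketch, nothing llama.cpp-specific):

gdb --args build/bin/llama-bench -m /models/gguf/llama-2-7b.Q4_0.gguf
(gdb) run    # runs until the SIGSEGV is hit
(gdb) bt     # backtrace at the crash site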

Note that llama-cli runs without issue, so the build is at least partially working:

❯ GGML_SYCL_DEBUG=1 build/bin/llama-cli -m /models/gguf/llama-2-7b.Q4_0.gguf -n 128
ggml_sycl_init: GGML_SYCL_FORCE_MMQ:   no
ggml_sycl_init: SYCL_USE_XMX: yes
ggml_sycl_init: found 1 SYCL devices:
build: 4334 (4ddd199f) with Intel(R) oneAPI DPC++/C++ Compiler 2025.0.0 (2025.0.0.20241008) for x86_64-unknown-linux-gnu
main: llama backend init
main: load the model and apply lora adapter, if any
llama_load_model_from_file: using device SYCL0 (AMD Radeon Pro W7900) - 45864 MiB free
llama_model_loader: loaded meta data with 19 key-value pairs and 291 tensors from /models/gguf/llama-2-7b.Q4_0.gguf (version GGUF V2)
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = llama
llama_model_loader: - kv   1:                               general.name str              = LLaMA v2
llama_model_loader: - kv   2:                       llama.context_length u32              = 4096
llama_model_loader: - kv   3:                     llama.embedding_length u32              = 4096
llama_model_loader: - kv   4:                          llama.block_count u32              = 32
llama_model_loader: - kv   5:                  llama.feed_forward_length u32              = 11008
llama_model_loader: - kv   6:                 llama.rope.dimension_count u32              = 128
llama_model_loader: - kv   7:                 llama.attention.head_count u32              = 32
llama_model_loader: - kv   8:              llama.attention.head_count_kv u32              = 32
llama_model_loader: - kv   9:     llama.attention.layer_norm_rms_epsilon f32              = 0.000010
llama_model_loader: - kv  10:                          general.file_type u32              = 2
llama_model_loader: - kv  11:                       tokenizer.ggml.model str              = llama
llama_model_loader: - kv  12:                      tokenizer.ggml.tokens arr[str,32000]   = ["<unk>", "<s>", "</s>", "<0x00>", "<...
llama_model_loader: - kv  13:                      tokenizer.ggml.scores arr[f32,32000]   = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  14:                  tokenizer.ggml.token_type arr[i32,32000]   = [2, 3, 3, 6, 6, 6, 6, 6, 6, 6, 6, 6, ...
llama_model_loader: - kv  15:                tokenizer.ggml.bos_token_id u32              = 1
llama_model_loader: - kv  16:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  17:            tokenizer.ggml.unknown_token_id u32              = 0
llama_model_loader: - kv  18:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:   65 tensors
llama_model_loader: - type q4_0:  225 tensors
llama_model_loader: - type q6_K:    1 tensors
llm_load_vocab: special_eos_id is not in special_eog_ids - the tokenizer config may be incorrect
llm_load_vocab: special tokens cache size = 3
llm_load_vocab: token to piece cache size = 0.1684 MB
llm_load_print_meta: format           = GGUF V2
llm_load_print_meta: arch             = llama
llm_load_print_meta: vocab type       = SPM
llm_load_print_meta: n_vocab          = 32000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 4096
llm_load_print_meta: n_embd           = 4096
llm_load_print_meta: n_layer          = 32
llm_load_print_meta: n_head           = 32
llm_load_print_meta: n_head_kv        = 32
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 4096
llm_load_print_meta: n_embd_v_gqa     = 4096
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-05
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 11008
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 0
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 4096
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 7B
llm_load_print_meta: model ftype      = Q4_0
llm_load_print_meta: model params     = 6.74 B
llm_load_print_meta: model size       = 3.56 GiB (4.54 BPW)
llm_load_print_meta: general.name     = LLaMA v2
llm_load_print_meta: BOS token        = 1 '<s>'
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 0 '<unk>'
llm_load_print_meta: LF token         = 13 '<0x0A>'
llm_load_print_meta: EOG token        = 2 '</s>'
llm_load_print_meta: max token length = 48
llm_load_tensors: offloading 0 repeating layers to GPU
llm_load_tensors: offloaded 0/33 layers to GPU
llm_load_tensors:   CPU_Mapped model buffer size =  3647.87 MiB
llm_load_tensors:  CPU_AARCH64 model buffer size =  3474.00 MiB
..................................................................................................
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 4096
llama_new_context_with_model: n_ctx_per_seq = 4096
llama_new_context_with_model: n_batch       = 2048
llama_new_context_with_model: n_ubatch      = 512
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 10000.0
llama_new_context_with_model: freq_scale    = 1
[SYCL] call ggml_check_sycl
ggml_check_sycl: GGML_SYCL_DEBUG: 1
ggml_check_sycl: GGML_SYCL_F16: no
[SYCL] call ggml_backend_sycl_print_sycl_devices
Found 1 SYCL devices:
|  |                   |                                       |       |Max    |        |Max  |Global |                     |
|  |                   |                                       |       |compute|Max work|sub  |mem    |                     |
|ID|        Device Type|                                   Name|Version|units  |group   |group|size   |       Driver version|
|--|-------------------|---------------------------------------|-------|-------|--------|-----|-------|---------------------|
| 0|        [hip:gpu:0]|                   AMD Radeon Pro W7900| 1100.0|     48|    1024|   32| 48301M|         HIP 60342.13|
[SYCL] call ggml_backend_sycl_buffer_type
[SYCL] call ggml_backend_sycl_get_device_count
[SYCL] call ggml_backend_sycl_host_buffer_type
llama_kv_cache_init:        CPU KV buffer size =  2048.00 MiB
llama_new_context_with_model: KV self size  = 2048.00 MiB, K (f16): 1024.00 MiB, V (f16): 1024.00 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.12 MiB
llama_new_context_with_model:      SYCL0 compute buffer size =   353.00 MiB
llama_new_context_with_model:  SYCL_Host compute buffer size =    24.01 MiB
llama_new_context_with_model: graph nodes  = 1030
llama_new_context_with_model: graph splits = 356 (with bs=512), 1 (with bs=1)
common_init_from_params: warming up the model with an empty run - please wait ... (--no-warmup to disable)
main: llama threadpool init, n_threads = 24

system_info: n_threads = 24 (n_threads_batch = 24) / 48 | CPU : SSE3 = 1 | SSSE3 = 1 | AVX = 1 | AVX2 = 1 | F16C = 1 | FMA = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | LLAMAFILE = 1 | OPENMP = 1 | AARCH64_REPACK = 1 |

sampler seed: 83597731
sampler params:
        repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
        dry_multiplier = 0.000, dry_base = 1.750, dry_allowed_length = 2, dry_penalty_last_n = -1
        top_k = 40, top_p = 0.950, min_p = 0.050, xtc_probability = 0.000, xtc_threshold = 0.100, typical_p = 1.000, temp = 0.800
        mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampler chain: logits -> logit-bias -> penalties -> dry -> top-k -> typical -> top-p -> min-p -> xtc -> temp-ext -> dist
generate: n_ctx = 4096, n_batch = 2048, n_predict = 128, n_keep = 1

 everybody, I've got a new post up on my other blog, if anybody wants to read it. Hinweis: Das ist auf Deutsch.
So, I'm on this plane, and I look around, and everybody around me is either reading a newspaper or listening to their iPod. The newspaper is a bit of an issue for me, since I can't read them. I do like to read the "Globe and Mail" though, it's a pretty good paper.
Anyway, it makes me wonder how many people do things because it's what they've always done, and that

llama_perf_sampler_print:    sampling time =       4.12 ms /   129 runs   (    0.03 ms per token, 31287.90 tokens per second)
llama_perf_context_print:        load time =    1034.34 ms
llama_perf_context_print: prompt eval time =       0.00 ms /     1 tokens (    0.00 ms per token,      inf tokens per second)
llama_perf_context_print:        eval time =    3771.10 ms /   128 runs   (   29.46 ms per token,    33.94 tokens per second)
llama_perf_context_print:       total time =    3779.83 ms /   129 tokens
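
One caveat on the comparison: the llama-cli run above offloaded 0/33 layers to the GPU (no -ngl was passed), whereas llama-bench offloads layers by default, so a closer apples-to-apples check would be llama-cli with full offload, e.g.:

GGML_SYCL_DEBUG=1 build/bin/llama-cli -m /models/gguf/llama-2-7b.Q4_0.gguf -ngl 99 -n 128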

First Bad Commit

No response

Relevant log output

No response
