CUDA error #2258

Closed
@MathiasSchindler

Description

When whisper.cpp is built with CUDA support, the model starts as usual but crashes after a brief moment. With the -ng flag to disable the GPU, transcription works at the expected CPU speed.

```
mathias@mathias-b650:~/whisper.cpp$ ./main -m models/ggml-large-v3.bin samples/bundestag-svea.wav
whisper_init_from_file_with_params_no_state: loading model from 'models/ggml-large-v3.bin'
whisper_init_with_params_no_state: use gpu = 1
whisper_init_with_params_no_state: flash attn = 0
whisper_init_with_params_no_state: gpu_device = 0
whisper_init_with_params_no_state: dtw = 0
whisper_model_load: loading model
whisper_model_load: n_vocab = 51866
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 1280
whisper_model_load: n_audio_head = 20
whisper_model_load: n_audio_layer = 32
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 1280
whisper_model_load: n_text_head = 20
whisper_model_load: n_text_layer = 32
whisper_model_load: n_mels = 128
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 5 (large v3)
whisper_model_load: adding 1609 extra tokens
whisper_model_load: n_langs = 100
ggml_cuda_init: GGML_CUDA_FORCE_MMQ: no
ggml_cuda_init: CUDA_USE_TENSOR_CORES: yes
ggml_cuda_init: found 1 CUDA devices:
Device 0: NVIDIA GeForce RTX 4070 Ti SUPER, compute capability 8.9, VMM: yes
whisper_model_load: CUDA0 total size = 3094.36 MB
whisper_model_load: model size = 3094.36 MB
whisper_backend_init_gpu: using CUDA backend
whisper_mel_init: n_len = 3001, n_len_org = 1, n_mel = 128
whisper_mel_init: n_len = 6000, n_len_org = 6000, n_mel = 128
whisper_init_state: kv self size = 251.66 MB
whisper_init_state: kv cross size = 251.66 MB
whisper_init_state: kv pad size = 7.86 MB
whisper_init_state: compute buffer (conv) = 36.26 MB
whisper_init_state: compute buffer (encode) = 926.66 MB
whisper_init_state: compute buffer (cross) = 9.38 MB
whisper_init_state: compute buffer (decode) = 215.95 MB

system_info: n_threads = 4 / 24 | AVX = 1 | AVX2 = 1 | AVX512 = 1 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | METAL = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | CUDA = 1 | COREML = 0 | OPENVINO = 0

main: processing 'samples/bundestag-svea.wav' (108898656 samples, 6806.2 sec), 4 threads, 1 processors, 5 beams + best of 5, lang = en, task = transcribe, timestamps = 1 ...

whisper_mel_init: n_len = 683616, n_len_org = 680616, n_mel = 128

[00:00:00.240 --> 00:00:10.160] So, I welcome everyone to our 57th session of the Digital Committee and to the public hearing.
[00:00:10.160 --> 00:00:14.920] Today we have set up a single agenda item.
[00:00:14.920 --> 00:00:19.480] I'll start with, this is now just a formality for those who are sitting.
[00:00:19.480 --> 00:00:25.360] So it's about the federal government's bill, namely the draft of a bill for the implementation
CUDA error: invalid argument
current device: 0, in function ggml_backend_cuda_graph_compute at ggml-cuda.cu:2689
cudaGraphKernelNodeSetParams(cuda_ctx->cuda_graph->nodes[i], &cuda_ctx->cuda_graph->params[i])
GGML_ASSERT: ggml-cuda.cu:100: !"CUDA error"
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
Aborted (core dumped)
```
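For readers trying to place the abort: ggml's CUDA backend can record the compute graph into a CUDA graph and then update the recorded kernel nodes on later runs via cudaGraphKernelNodeSetParams; the assertion fires because that call returns "invalid argument", which the CUDA error check promotes to the GGML_ASSERT abort seen above. Below is a minimal, self-contained sketch of that CUDA-graph pattern only. It is not the actual ggml-cuda.cu code; the kernel, buffer, and variable names are made up for illustration, and the point is just where a parameter update on a captured kernel node can fail with this status.

```
// Sketch (not ggml-cuda.cu): capture a kernel launch into a CUDA graph,
// then update the captured kernel node's parameters before launching.
// cudaGraphKernelNodeSetParams is the call that returns "invalid argument"
// in the whisper.cpp log when the new parameters do not match the node.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

#define CUDA_CHECK(call)                                                   \
    do {                                                                   \
        cudaError_t err_ = (call);                                         \
        if (err_ != cudaSuccess) {                                         \
            fprintf(stderr, "CUDA error: %s at %s:%d\n",                   \
                    cudaGetErrorString(err_), __FILE__, __LINE__);         \
            abort();  /* analogous to the GGML_ASSERT abort in the log */  \
        }                                                                  \
    } while (0)

__global__ void scale(float * x, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= factor;
}

int main() {
    int n = 1024;
    float * d_x = nullptr;
    CUDA_CHECK(cudaMalloc(&d_x, n * sizeof(float)));

    cudaStream_t stream;
    CUDA_CHECK(cudaStreamCreate(&stream));

    // Record one launch of the kernel into a CUDA graph via stream capture.
    cudaGraph_t graph;
    CUDA_CHECK(cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal));
    float factor = 2.0f;
    scale<<<(n + 255) / 256, 256, 0, stream>>>(d_x, factor, n);
    CUDA_CHECK(cudaStreamEndCapture(stream, &graph));

    // Fetch the single captured kernel node and its current parameters.
    size_t num_nodes = 0;
    CUDA_CHECK(cudaGraphGetNodes(graph, nullptr, &num_nodes));
    cudaGraphNode_t node;
    CUDA_CHECK(cudaGraphGetNodes(graph, &node, &num_nodes));

    cudaKernelNodeParams params = {};
    CUDA_CHECK(cudaGraphKernelNodeGetParams(node, &params));

    // Swap in new kernel arguments for the next run. If the parameters are
    // inconsistent with the recorded node, this call fails with
    // cudaErrorInvalidValue ("invalid argument").
    float new_factor = 3.0f;
    void * kernel_args[] = { &d_x, &new_factor, &n };
    params.kernelParams = kernel_args;
    CUDA_CHECK(cudaGraphKernelNodeSetParams(node, &params));

    // Instantiate and launch the updated graph.
    cudaGraphExec_t graph_exec;
    CUDA_CHECK(cudaGraphInstantiate(&graph_exec, graph, nullptr, nullptr, 0));
    CUDA_CHECK(cudaGraphLaunch(graph_exec, stream));
    CUDA_CHECK(cudaStreamSynchronize(stream));

    CUDA_CHECK(cudaGraphExecDestroy(graph_exec));
    CUDA_CHECK(cudaGraphDestroy(graph));
    CUDA_CHECK(cudaStreamDestroy(stream));
    CUDA_CHECK(cudaFree(d_x));
    return 0;
}
```

The sketch runs cleanly when the updated parameters are consistent with the captured kernel; the error path is what matters here: ggml wraps the same call in its CUDA error check, so any non-success return becomes the "CUDA error: invalid argument" followed by the GGML_ASSERT abort shown in the log.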
