Description
When I call run_whisper, it fails with the error log below. Can you help take a look? Many thanks.
==================
In [6]: run_whisper(data = d, libname = so_p, fname_model =md_p, language=b'en')
whisper_init_from_file_with_params_no_state: loading model from './base.en.bin'
whisper_model_load: loading model
whisper_model_load: n_vocab = 51864
whisper_model_load: n_audio_ctx = 1500
whisper_model_load: n_audio_state = 512
whisper_model_load: n_audio_head = 8
whisper_model_load: n_audio_layer = 6
whisper_model_load: n_text_ctx = 448
whisper_model_load: n_text_state = 512
whisper_model_load: n_text_head = 8
whisper_model_load: n_text_layer = 6
whisper_model_load: n_mels = 80
whisper_model_load: ftype = 1
whisper_model_load: qntvr = 0
whisper_model_load: type = 2 (base)
whisper_model_load: adding 1607 extra tokens
whisper_model_load: n_langs = 99
whisper_backend_init: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading 'ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 156.68 MB, ( 157.20 / 11453.25)
whisper_model_load: Metal buffer size = 156.67 MB
whisper_model_load: model size = 156.58 MB
whisper_backend_init: using Metal backend
ggml_metal_init: allocating
ggml_metal_init: found device: Apple M1 Pro
ggml_metal_init: picking default device: Apple M1 Pro
ggml_metal_init: default.metallib not found, loading from source
ggml_metal_init: loading './ggml-metal.metal'
ggml_metal_init: GPU name: Apple M1 Pro
ggml_metal_init: GPU family: MTLGPUFamilyApple7 (1007)
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: recommendedMaxWorkingSetSize = 11453.25 MB
ggml_metal_init: maxTransferRate = built-in GPU
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 16.52 MB, ( 173.72 / 11453.25)
whisper_init_state: kv self size = 16.52 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 18.43 MB, ( 192.15 / 11453.25)
whisper_init_state: kv cross size = 18.43 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 192.17 / 11453.25)
whisper_init_state: compute buffer (conv) = 14.79 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 192.18 / 11453.25)
whisper_init_state: compute buffer (encode) = 85.93 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 192.20 / 11453.25)
whisper_init_state: compute buffer (cross) = 4.71 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 0.02 MB, ( 192.22 / 11453.25)
whisper_init_state: compute buffer (decode) = 96.41 MB
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 13.16 MB, ( 205.37 / 11453.25)
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 84.30 MB, ( 289.67 / 11453.25)
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 3.08 MB, ( 292.75 / 11453.25)
ggml_metal_add_buffer: allocated 'backend ' buffer, size = 94.78 MB, ( 387.53 / 11453.25)
[1] 23842 segmentation fault ipython
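The model and Metal backend initialize cleanly, and the crash happens only after all buffers are allocated, so a common culprit with ctypes-style bindings like this is the audio buffer itself: whisper.cpp expects 16 kHz mono float32 PCM, and passing 16-bit integer samples (or the wrong sample rate) can segfault inside the native call. A minimal sanity-check sketch, using only the standard library (the loader function and its name are my own illustration, not part of run_whisper), that converts a WAV file into the expected layout and asserts the preconditions before handing the data to the binding:

```python
import array
import wave

def load_pcm_f32(path):
    """Load a mono 16-bit 16 kHz WAV and return float32 samples in [-1, 1],
    the buffer layout whisper.cpp expects. Raises AssertionError early
    instead of letting the native code segfault on bad input."""
    with wave.open(path, "rb") as w:
        assert w.getnchannels() == 1, "expected mono audio"
        assert w.getsampwidth() == 2, "expected 16-bit samples"
        assert w.getframerate() == 16000, "expected 16 kHz sample rate"
        raw = w.readframes(w.getnframes())
    ints = array.array("h", raw)  # signed 16-bit integers
    # Normalize to float32 in [-1, 1); "f" is a 4-byte float array.
    return array.array("f", (s / 32768.0 for s in ints))
```

If `d` was produced some other way, it may be worth checking that it is float32 (4 bytes per sample), contiguous, and normalized before the `run_whisper` call; an integer or float64 buffer of the same length would explain a crash after successful model load.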