
Bug: ggml.c:5278: !ggml_is_transposed(a) #8398

Closed · guinmoon opened this issue Jul 9, 2024 · 11 comments
Labels: bug-unconfirmed, low severity

guinmoon (Contributor) commented Jul 9, 2024

What happened?

For some reason, when I use the llama.cpp code in my project on T5 models, I get this error:

ggml.c:5278: !ggml_is_transposed(a)

At the same time, llama-cli built from the same sources works fine.
Can anyone tell me what the problem is?

Name and Version

Release b3347

What operating system are you seeing the problem on?

No response

Relevant log output

No response

guinmoon added the bug-unconfirmed and low severity labels on Jul 9, 2024
fairydreaming (Collaborator) commented Jul 9, 2024

> For some reason, when I use the llama.cpp code in my project on T5 models, I get this error:
>
> ggml.c:5278: !ggml_is_transposed(a)
>
> At the same time, llama-cli built from the same sources works fine. Can anyone tell me what the problem is?

@guinmoon Not sure if this is the cause, but to use T5 models you have to call some new additional API functions; see the sources of llama-cli for reference:

if (llama_model_has_encoder(model)) {
    int enc_input_size = embd_inp.size();
    llama_token * enc_input_buf = embd_inp.data();
    // run the encoder over the full prompt first
    if (llama_encode(ctx, llama_batch_get_one(enc_input_buf, enc_input_size, 0, 0))) {
        LOG_TEE("%s : failed to eval\n", __func__);
        return 1;
    }
    // the decoder starts from a dedicated start token (falling back to BOS)
    llama_token decoder_start_token_id = llama_model_decoder_start_token(model);
    if (decoder_start_token_id == -1) {
        decoder_start_token_id = llama_token_bos(model);
    }
    embd_inp.clear();
    embd_inp.push_back(decoder_start_token_id);
}

Basically you have to pass the prompt tokens to the llama_encode() call and then use the token returned from llama_model_decoder_start_token() to start a new token sequence passed to the llama_decode() call.
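To make the flow concrete, here is a minimal sketch of the decoder loop that follows the encoder call above. This is a simplification, not the actual llama-cli code: greedy argmax sampling replaces the real sampling chain to keep the example self-contained, and n_predict_max is a hypothetical generation cap.

// sketch of the decode loop after llama_encode(); greedy sampling for brevity
llama_token tok = decoder_start_token_id;
for (int n_past = 0; n_past < n_predict_max; ) { // n_predict_max: hypothetical cap
    // feed the single current token to the decoder
    if (llama_decode(ctx, llama_batch_get_one(&tok, 1, n_past, 0))) {
        LOG_TEE("%s : failed to eval\n", __func__);
        return 1;
    }
    n_past += 1;
    // pick the most likely next token (argmax over the last logits)
    const float * logits  = llama_get_logits(ctx);
    const int     n_vocab = llama_n_vocab(llama_get_model(ctx));
    tok = 0;
    for (llama_token id = 1; id < n_vocab; id++) {
        if (logits[id] > logits[tok]) tok = id;
    }
    if (tok == llama_token_eos(model)) {
        break; // the decoder signalled end of sequence
    }
}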

fairydreaming (Collaborator) commented:

@guinmoon I confirmed that calling llama_decode() for an encoder-decoder model like T5 without a prior llama_encode() call causes the weird error message you mentioned. To avoid confusion I added an assert to point API users in the right direction: #8400
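(The actual change is in #8400; purely as an illustration of its shape, a guard like the one below could fire when the decoder of an encoder-decoder model is run without encoder output. The flag name encoder_output_ready is made up here and is not llama.cpp's real internal state.)

// illustrative only: 'encoder_output_ready' is a hypothetical stand-in for
// whatever internal state the real assert in #8400 checks
GGML_ASSERT((!llama_model_has_encoder(model) || encoder_output_ready) &&
            "encoder-decoder model: call llama_encode() before llama_decode()");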

steampunque commented:
> Basically you have to pass the prompt tokens to the llama_encode() call and then use the token returned from llama_model_decoder_start_token() to start a new token sequence passed to the llama_decode() call.

There is none of this code in the server, so I guess the server has not been updated to support T5 models yet? I also tried to load madlad400 and got an assert here in the server:

    // the server asserts that the model does not force an appended EOS token;
    // madlad400's metadata sets tokenizer.ggml.add_eos_token = true (see the
    // log below), which trips this assert
    add_bos_token = llama_should_add_bos_token(model);
    GGML_ASSERT(llama_add_eos_token(model) != 1);

I also tried to run madlad400 with the CLI, but it just responds with blank space for <2de> Hello how are you?

fairydreaming (Collaborator) commented Jul 10, 2024

> Basically you have to pass the prompt tokens to the llama_encode() call and then use the token returned from llama_model_decoder_start_token() to start a new token sequence passed to the llama_decode() call.
>
> There is none of this code in the server, so I guess the server has not been updated to support T5 models yet?

That's right.

> I also tried to run madlad400 with the CLI, but it just responds with blank space for <2de> Hello how are you?

Works just fine for me (current master):

python3 convert_hf_to_gguf.py /mnt/md0/huggingface/hub/models--google--madlad400-10b-mt/snapshots/9f2797629c31e69617186dbe5f0ca43bf662f36d/ --outfile models/madlad400-10b.gguf --outtype "f32"
./llama-cli -m models/madlad400-10b.gguf -p '<2de> Hello how are you?'
Log start
...
system_info: n_threads = 32 / 64 | AVX = 1 | AVX_VNNI = 0 | AVX2 = 1 | AVX512 = 1 | AVX512_VBMI = 1 | AVX512_VNNI = 1 | AVX512_BF16 = 1 | FMA = 1 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 0 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 1 | 
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 0


▅ Hallo, wie geht es? [end of text]

llama_print_timings:        load time =    1162.90 ms
llama_print_timings:      sample time =       0.66 ms /     8 runs   (    0.08 ms per token, 12139.61 tokens per second)
llama_print_timings: prompt eval time =     449.94 ms /     9 tokens (   49.99 ms per token,    20.00 tokens per second)
llama_print_timings:        eval time =    1679.21 ms /     7 runs   (  239.89 ms per token,     4.17 tokens per second)
llama_print_timings:       total time =    2275.37 ms /    16 tokens
Log end

fairydreaming (Collaborator) commented:

@steampunque Let me know exactly what model you were trying to run.

guinmoon (Contributor, Author) commented:

> @guinmoon Not sure if this is the cause, but to use T5 models you have to call some new additional API functions; see the sources of llama-cli for reference: [...]
>
> Basically you have to pass the prompt tokens to the llama_encode() call and then use the token returned from llama_model_decoder_start_token() to start a new token sequence passed to the llama_decode() call.

Thank you very much, this is a working solution for me!

steampunque commented:

> @steampunque Let me know exactly what model you were trying to run.

I tried a Q6_K quant of this model: madlad400-7b-mt

I think my steps were essentially identical to what you showed in your response. It may still be just some pilot error on my part; I will try again with the 10b model you used. Thanks for your reply.

fairydreaming (Collaborator) commented:

> @steampunque Let me know exactly what model you were trying to run.
>
> I tried a Q6_K quant of this model: madlad400-7b-mt
>
> I think my steps were essentially identical to what you showed in your response. It may still be just some pilot error on my part; I will try again with the 10b model you used. Thanks for your reply.

@steampunque You can also verify whether the 7b model answers your prompt correctly in the HF transformers library. Another idea is to try an f32 model instead of a quantized one.

steampunque commented:

> @steampunque You can also verify whether the 7b model answers your prompt correctly in the HF transformers library. Another idea is to try an f32 model instead of a quantized one.

There is some kind of problem when using interactive mode with the CLI with this model; it just comes back with spaces:

llama-cli -m madlad400-7b-mt.Q6_K.gguf -ngl 49 --interactive-first
Log start
main: build = 3334 (f7cab35e)
main: built with cc (GCC) 11.2.0 for x86_64-slackware-linux
main: seed  = 1720623995
llama_model_loader: loaded meta data with 26 key-value pairs and 1110 tensors from madlad400-7b-mt.Q6_K.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = t5
llama_model_loader: - kv   1:                               general.name str              = T5
llama_model_loader: - kv   2:                          t5.context_length u32              = 512
llama_model_loader: - kv   3:                        t5.embedding_length u32              = 2048
llama_model_loader: - kv   4:                     t5.feed_forward_length u32              = 8192
llama_model_loader: - kv   5:                             t5.block_count u32              = 48
llama_model_loader: - kv   6:                    t5.attention.head_count u32              = 16
llama_model_loader: - kv   7:                    t5.attention.key_length u32              = 128
llama_model_loader: - kv   8:                  t5.attention.value_length u32              = 128
llama_model_loader: - kv   9:            t5.attention.layer_norm_epsilon f32              = 0.000001
llama_model_loader: - kv  10:        t5.attention.relative_buckets_count u32              = 32
llama_model_loader: - kv  11:        t5.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  12:                  t5.decoder_start_token_id u32              = 0
llama_model_loader: - kv  13:                          general.file_type u32              = 18
llama_model_loader: - kv  14:                       tokenizer.ggml.model str              = t5
llama_model_loader: - kv  15:                         tokenizer.ggml.pre str              = default
llama_model_loader: - kv  16:                      tokenizer.ggml.tokens arr[str,256000]  = ["<unk>", "<s>", "</s>", "\n", "<2ace>...
llama_model_loader: - kv  17:                      tokenizer.ggml.scores arr[f32,256000]  = [0.000000, 0.000000, 0.000000, 0.0000...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,256000]  = [2, 3, 3, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:            tokenizer.ggml.add_space_prefix bool             = true
llama_model_loader: - kv  20:    tokenizer.ggml.remove_extra_whitespaces bool             = false
llama_model_loader: - kv  21:                tokenizer.ggml.eos_token_id u32              = 2
llama_model_loader: - kv  22:            tokenizer.ggml.padding_token_id u32              = 1
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:               tokenizer.ggml.add_eos_token bool             = true
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  242 tensors
llama_model_loader: - type q6_K:  866 tensors
llama_model_loader: - type bf16:    2 tensors
llm_load_vocab: special tokens cache size = 259
llm_load_vocab: token to piece cache size = 1.7509 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = t5
llm_load_print_meta: vocab type       = UGM
llm_load_print_meta: n_vocab          = 256000
llm_load_print_meta: n_merges         = 0
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 512
llm_load_print_meta: n_embd           = 2048
llm_load_print_meta: n_layer          = 48
llm_load_print_meta: n_head           = 16
llm_load_print_meta: n_head_kv        = 16
llm_load_print_meta: n_rot            = 128
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 128
llm_load_print_meta: n_embd_head_v    = 128
llm_load_print_meta: n_gqa            = 1
llm_load_print_meta: n_embd_k_gqa     = 2048
llm_load_print_meta: n_embd_v_gqa     = 2048
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 8192
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = -1
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 10000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 512
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: model type       = ?B
llm_load_print_meta: model ftype      = Q6_K
llm_load_print_meta: model params     = 8.30 B
llm_load_print_meta: model size       = 6.34 GiB (6.56 BPW) 
llm_load_print_meta: general.name     = T5
llm_load_print_meta: EOS token        = 2 '</s>'
llm_load_print_meta: UNK token        = 2 '</s>'
llm_load_print_meta: PAD token        = 1 '<s>'
llm_load_print_meta: LF token         = 805 '▁'
llm_load_print_meta: max token length = 48
ggml_cuda_init: GGML_CUDA_FORCE_MMQ:    no
ggml_cuda_init: GGML_CUDA_FORCE_CUBLAS: no
ggml_cuda_init: found 1 CUDA devices:
  Device 0: NVIDIA GeForce GTX 1070, compute capability 6.1, VMM: yes
llm_load_tensors: ggml ctx size =    0.88 MiB
llm_load_tensors: offloading 48 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 49/49 layers to GPU
llm_load_tensors:        CPU buffer size =  2917.78 MiB
llm_load_tensors:      CUDA0 buffer size =  6082.05 MiB
..........................................................................................
llama_new_context_with_model: n_ctx      = 512
llama_new_context_with_model: n_batch    = 512
llama_new_context_with_model: n_ubatch   = 512
llama_new_context_with_model: flash_attn = 0
llama_new_context_with_model: freq_base  = 10000.0
llama_new_context_with_model: freq_scale = 1
llama_kv_cache_init:      CUDA0 KV buffer size =   192.00 MiB
llama_new_context_with_model: KV self size  =  192.00 MiB, K (f16):   96.00 MiB, V (f16):   96.00 MiB
llama_new_context_with_model:  CUDA_Host  output buffer size =     0.98 MiB
llama_new_context_with_model:      CUDA0 compute buffer size =   508.00 MiB
llama_new_context_with_model:  CUDA_Host compute buffer size =    23.00 MiB
llama_new_context_with_model: graph nodes  = 2694
llama_new_context_with_model: graph splits = 98

system_info: n_threads = 4 / 4 | AVX = 0 | AVX_VNNI = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | AVX512_BF16 = 0 | FMA = 0 | NEON = 0 | SVE = 0 | ARM_FMA = 0 | F16C = 0 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | SSSE3 = 1 | VSX = 0 | MATMUL_INT8 = 0 | LLAMAFILE = 0 | 
main: interactive mode on.
sampling: 
	repeat_last_n = 64, repeat_penalty = 1.000, frequency_penalty = 0.000, presence_penalty = 0.000
	top_k = 40, tfs_z = 1.000, top_p = 0.950, min_p = 0.050, typical_p = 1.000, temp = 0.800
	mirostat = 0, mirostat_lr = 0.100, mirostat_ent = 5.000
sampling order: 
CFG -> Penalties -> top_k -> tfs_z -> typical_p -> top_p -> min_p -> temperature 
generate: n_ctx = 512, n_batch = 2048, n_predict = -1, n_keep = 0


== Running in interactive mode. ==
 - Press Ctrl+C to interject at any time.
 - Press Return to return control to the AI.
 - To return control without starting a new line, end your input with '/'.
 - If you want to submit another line, end your input with '\'.

<2de> Hello how are you?

<2de> Hello how are you?

If I send in the prompt as in your example, it does output:

llama-cli -m madlad400-7b-mt.Q6_K.gguf -ngl 49 -p '<2de> Hello how are you?'
Log start
main: build = 3334 (f7cab35e)
main: built with cc (GCC) 11.2.0 for x86_64-slackware-linux
main: seed  = 1720624332
... (model load, system_info, and sampling output identical to the interactive run above) ...


 Hallo, wie geht es dir? [end of text]

llama_print_timings:        load time =  109799.75 ms
llama_print_timings:      sample time =       3.74 ms /     9 runs   (    0.42 ms per token,  2405.13 tokens per second)
llama_print_timings: prompt eval time =     213.72 ms /     9 tokens (   23.75 ms per token,    42.11 tokens per second)
llama_print_timings:        eval time =     498.63 ms /     8 runs   (   62.33 ms per token,    16.04 tokens per second)
llama_print_timings:       total time =    1193.71 ms /    17 tokens
Log end

fairydreaming (Collaborator) commented Jul 10, 2024

@steampunque I'm afraid that interactive mode is currently not supported for encoder-decoder models like T5.

steampunque commented:

> @steampunque I'm afraid that interactive mode is currently not supported for encoder-decoder models like T5.

No problem, and thank you for adding this great T5 support! I do not use the CLI (except to submit debug issues), only the server. I will look into patching my server based on your notes in this thread if adding T5 support to it is not on your roadmap for now.
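For anyone attempting such a patch, a per-request flow might look like the sketch below. Assumptions: the b3347-era llama.cpp API as used earlier in this thread; prompt_tokens is the tokenized request and send_error() is a hypothetical server-side error path. This is not existing llama-server code.

// hypothetical per-request handling for an encoder-decoder model
llama_kv_cache_clear(ctx); // drop KV state left over from the previous request

const int32_t n_prompt = (int32_t) prompt_tokens.size();
if (llama_model_has_encoder(model)) {
    // run the encoder over the whole prompt first
    if (llama_encode(ctx, llama_batch_get_one(prompt_tokens.data(), n_prompt, 0, 0))) {
        return send_error(); // hypothetical error path
    }
    // then seed the decoder with the start token (falling back to BOS) and
    // generate with llama_decode() as in the loop sketched earlier
    llama_token start = llama_model_decoder_start_token(model);
    if (start == -1) {
        start = llama_token_bos(model);
    }
    // ... decode loop ...
}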
