Bug: ggml.c:5278: !ggml_is_transposed(a) #8398
Comments
@guinmoon Not sure if this is the cause, but to use T5 models you have to call some new additional API; see the sources of llama-cli for reference: llama.cpp/examples/main/main.cpp, lines 531 to 547 at commit e500d61.
Basically you have to pass the prompt tokens to llama_encode() first.
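For reference, the cited block in main.cpp looks roughly like the sketch below (condensed; the exact signature of llama_batch_get_one has changed across versions, so treat the calls as approximate for that commit):

```cpp
// Encoder-decoder (e.g. T5) handling before the generation loop: run the
// encoder over the prompt once, then seed the decoder with the decoder
// start token instead of the raw prompt tokens.
if (llama_model_has_encoder(model)) {
    int           enc_input_size = embd_inp.size();
    llama_token * enc_input_buf  = embd_inp.data();

    if (llama_encode(ctx, llama_batch_get_one(enc_input_buf, enc_input_size, 0, 0))) {
        LOG_TEE("%s : failed to eval\n", __func__);
        return 1;
    }

    llama_token decoder_start_token_id = llama_model_decoder_start_token(model);
    if (decoder_start_token_id == -1) {
        // Fall back to BOS if the model does not define a decoder start token.
        decoder_start_token_id = llama_token_bos(model);
    }

    embd_inp.clear();
    embd_inp.push_back(decoder_start_token_id);
}
```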
There is none of this code in the server, so I guess the server has not been updated to support T5 models yet? I also tried to load madlad400 and got an assert here in the server:
I also tried to run madlad400 with the CLI, but it just responds with blank space for <2de> Hello how are you?
That's right.
Works just fine for me (current master):
@steampunque Let me know exactly which model you were trying to run.
Thank you very much, this is a working solution for me!
I tried a Q6_K quant of this model: I think my steps were essentially identical to what you showed in your response. It still may just be some pilot error on my part; I will try again with the 10b model you used. Thanks for your reply.
@steampunque You can also verify whether the 7b model answers your prompt correctly in the HF transformers library. Another idea to try is using an f32 model instead of a quantized one.
There is some kind of problem when using interactive mode with the CLI with this model; it just comes back with spaces:
If I send the prompt in as in your example, it does output:
@steampunque I'm afraid that interactive mode is currently not supported for encoder-decoder models like T5.
No problem, and thank you for adding this great T5 support! I do not use the CLI (except to submit debug issues), only the server. I will look into patching my server based on your notes in this thread if adding T5 support to it is not on your roadmap for now.
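(For anyone attempting that patch: a minimal sketch of what it might look like, assuming the same API as in the llama-cli excerpt above. llama_model_has_encoder, llama_encode, llama_model_decoder_start_token and llama_token_bos are real llama.cpp API calls; prompt_tokens and the error handling are illustrative, not the server's actual code.)

```cpp
// Hypothetical sketch only: run the encoder pass at the point where the
// server evaluates a request's prompt, before the usual decode loop.
if (llama_model_has_encoder(model)) {
    // Feed the whole prompt to the encoder first...
    if (llama_encode(ctx, llama_batch_get_one(prompt_tokens.data(), (int) prompt_tokens.size(), 0, 0))) {
        // ...handle the failure the same way the server handles a failed llama_decode...
    }

    // ...then start decoding from the decoder start token (BOS as fallback)
    // instead of from the raw prompt tokens.
    llama_token decoder_start_token_id = llama_model_decoder_start_token(model);
    if (decoder_start_token_id == -1) {
        decoder_start_token_id = llama_token_bos(model);
    }
    prompt_tokens.clear();
    prompt_tokens.push_back(decoder_start_token_id);
}
// ... continue with the normal llama_decode generation loop ...
```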
What happened?
For some reason, when I use the llama.cpp code in my project on T5 models, I get this error: GGML_ASSERT: ggml.c:5278: !ggml_is_transposed(a)
At the same time, llama-cli built from the same sources works fine.
Can anyone tell me what the problem is?
Name and Version
Release b3347
What operating system are you seeing the problem on?
No response
Relevant log output
No response