Can't run glm-4-9b-chat on CUDA 12 #18
According to the log message below, the model name in the request was incorrectly set to
|
In case there was something wrong with the chatbot UI, I also sent an API request to the model directly.
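For reference, a minimal sketch of such a direct request might look like the following. The endpoint, port, and model name are assumptions based on the default LlamaEdge OpenAI-compatible API server, not values taken from the log above.

```python
import requests

# Assumed defaults: the LlamaEdge api-server usually listens on port 8080 and
# exposes an OpenAI-compatible /v1/chat/completions endpoint. The "model"
# field must match the model name the server was started with.
url = "http://localhost:8080/v1/chat/completions"
payload = {
    "model": "glm-4-9b-chat",
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
}

resp = requests.post(url, json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```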
The log is as follows:
|
A CUDA error was triggered. It seems like a memory issue. @hydai, could you please help with this issue? Thanks!
Is this model supported by llama.cpp?
Yes.
Could you please try running this model with llama.cpp with CUDA enabled? Since this is an internal error in the llama.cpp CUDA backend, I would like to know whether it is an upstream issue.
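One rough way to exercise the llama.cpp CUDA backend outside LlamaEdge is through the llama-cpp-python bindings. The sketch below is an assumption about how such a check could be done, not a command used in this issue, and it requires the bindings to be built with CUDA support (on recent versions, something like `CMAKE_ARGS="-DGGML_CUDA=on" pip install llama-cpp-python`; the exact CMake flag depends on the llama.cpp version).

```python
from llama_cpp import Llama

# Assumption: llama-cpp-python was installed with CUDA support enabled,
# so inference below runs through the llama.cpp CUDA backend.
llm = Llama(
    model_path="glm-4-9b-chat-Q5_K_M.gguf",  # path to the same GGUF file
    n_gpu_layers=-1,  # offload all layers to the GPU to hit the CUDA code path
    n_ctx=4096,
)

resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, who are you?"}],
)
print(resp["choices"][0]["message"]["content"])
```

If this also crashes with a CUDA error, that would point to an upstream llama.cpp issue rather than a LlamaEdge one.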
When I run glm-4-9b-chat-Q5_K_M.gguf on the CUDA 12 machine, the API server starts successfully. However, when I send a question, the API server crashes.
The command I used to start the API server is as follows:
Here is the error message:
Versions