Add support for gguf #8
The latest llama.cpp development has deprecated the ggml format in favor of a new gguf format. llama.cpp has chosen to break its API and render existing ggml models unusable. The goal for llama.clj is to upgrade without breaking backwards compatibility. More research is required, but the initial plan is something like:

I updated to the latest version of llama.cpp locally and was able to get a gguf model to run without too many changes. However, there are still a few updates in progress for llama.cpp that I'll probably wait on before making a new release.

phronmophobic added a commit that referenced this issue on Sep 30, 2023.

Fixed in v0.8.
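The format break discussed in this issue is detectable at load time: per the gguf specification, gguf files begin with the 4-byte ASCII magic `GGUF`, while older ggml-family files use different magics. The sketch below is a minimal illustration of that check; the helper name is hypothetical and not part of llama.clj or llama.cpp.

```python
def detect_model_format(header: bytes) -> str:
    """Guess a model file's format from its leading bytes.

    GGUF files start with the ASCII magic b"GGUF" (per the gguf spec).
    Anything else is treated here as a legacy ggml-family or unknown file;
    a real loader would also distinguish the older ggml/ggjt magics.
    """
    if header[:4] == b"GGUF":
        return "gguf"
    return "legacy-ggml-or-unknown"
```

A compatibility layer could branch on this result to keep older model files loading while routing gguf files to the new llama.cpp API.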