GPT4All uses a newer version of llama.cpp which can handle the new ggml formats. Currently, attempting to load a model in one of the newer formats throws an error similar to the following:
error loading model: unknown (magic, version) combination: 67676a74, 00000002; is this really a GGML file?
I looked it up and tried to build the new gpt4all-backend.
From a quick look, it seems it only supports dynamic linking, unlike this project. There's a good reason for dynamic linking: one does not need separate builds for AVX1 and AVX2, and it can support multiple llama.cpp versions at the same time. But it also means a binary compiled on one machine cannot simply be assumed to work on another.
With this project one is now unfortunately stuck with the old format.
It would be good to leave this issue open so that people know this project does not work with the new ggml formats.