Closed
Description
Conversion of the latest GPT4All-J ggml binary, obtained from the app installer, fails. The file:

3785248281 Apr 14 22:03 models/gpt4all/ggml-gpt4all-j.bin

Running:
./main -m models/gpt4all/ggml-gpt4all-j.bin -t 4 -n 512 --repeat_penalty 1.0 --color -ins -r "User:" -f prompts/reason-act.txt
main: seed = 1681502915
llama_model_load: loading model from 'models/gpt4all/ggml-gpt4all-j.bin' - please wait ...
llama_model_load: invalid model file 'models/gpt4all/ggml-gpt4all-j.bin' (too old, regenerate your model files or convert them with convert-unversioned-ggml-to-ggml.py!)
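The "too old" message may be misleading here: GPT4All-J is a GPT-J-architecture model, and llama.cpp only loads LLaMA-architecture ggml files, so the header simply isn't what the loader expects. One way to see what you actually have is to inspect the 4-byte magic at the start of the file. A minimal sketch (assuming the usual little-endian ggml header; `ggml_magic` is a hypothetical helper, not part of llama.cpp):

```python
import struct

def ggml_magic(path):
    # Read the first 4 bytes as a little-endian uint32.
    # llama.cpp-era files begin with a 4-byte magic, e.g.
    # 0x67676d6c ("ggml", unversioned) or 0x67676d66 ("ggmf", versioned).
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return hex(magic)
```

For example, `ggml_magic("models/gpt4all/ggml-gpt4all-j.bin")` would show whether the file even carries a magic the converters recognize, or whether the bytes after it are laid out for a different architecture.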
However, running

python convert-unversioned-ggml-to-ggml.py models/gpt4all/ggml-gpt4all-j.bin models/llama/tokenizer.model

produces no temp file. Then, running the migration tool from the older ggml format to the latest, I get:
llama.cpp % python migrate-ggml-2023-03-30-pr613.py models/gpt4all/ggml-gpt4all-j.bin models/gpt4all/ggml-gpt4all-j-new.bin
Traceback (most recent call last):
File "/Users/loretoparisi/Documents/Projects/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 311, in <module>
main()
File "/Users/loretoparisi/Documents/Projects/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 272, in main
tokens = read_tokens(fin, hparams)
File "/Users/loretoparisi/Documents/Projects/llama.cpp/migrate-ggml-2023-03-30-pr613.py", line 133, in read_tokens
word = fin.read(length)
ValueError: read length must be non-negative or -1
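This ValueError is consistent with the architecture mismatch above: `read_tokens` unpacks a signed 32-bit token length before each token string, and when the file layout differs from what the script expects (GPT4All-J uses the GPT-J layout, not the LLaMA one), arbitrary model bytes get interpreted as that length and can come out negative. A hypothetical minimal reproduction of just the failure mode, not the actual script:

```python
import struct
import tempfile

# Write 4 bytes that decode to a negative int32, simulating misaligned
# model data being read where a token length was expected.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(struct.pack("<i", -1234) + b"garbage")
    path = f.name

with open(path, "rb") as fin:
    (length,) = struct.unpack("<i", fin.read(4))  # length == -1234
    try:
        fin.read(length)
    except ValueError as e:
        # BufferedReader rejects negative sizes other than -1:
        # "read length must be non-negative or -1"
        print(e)
```

In other words, the migration script is not hitting a corrupt file so much as a file in a format it was never written to parse.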