convert-llama-ggml-to-gguf.py does not work after #3633 (#4631)

Closed
cebtenzzre opened this issue Dec 25, 2023 · 2 comments · Fixed by #5041
Comments

cebtenzzre (Collaborator) commented Dec 25, 2023:

$ ./convert-llama-ggml-to-gguf.py -i LLAMA2-13B-Holodeck-1-Q8_0.ggml -o LLAMA2-13B-Holodeck-1.Q8_0.gguf -m LLAMA2-13B-Holodeck-1
<snip>
Traceback (most recent call last):
  File "/home/jared/src/forks/llama.cpp/./convert-llama-ggml-to-gguf.py", line 445, in <module>
    main()
  File "/home/jared/src/forks/llama.cpp/./convert-llama-ggml-to-gguf.py", line 425, in main
    (params_override, vocab_override, special_vocab) = handle_metadata(cfg, model.hyperparameters)
                                                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/jared/src/forks/llama.cpp/./convert-llama-ggml-to-gguf.py", line 374, in handle_metadata
    vocab = convert.load_vocab(
            ^^^^^^^^^^^^^^^^^^
AttributeError: module 'convert' has no attribute 'load_vocab'

cc @strutive07 @KerfuffleV2
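
The traceback above is a plain attribute-removal failure: the script still calls convert.load_vocab, but that function no longer exists in convert.py after the #3633 refactor, so the call site only blows up at runtime. A minimal, self-contained sketch of the same failure mode with a getattr guard that produces a clearer error (the module here is a stub, not the real convert.py):

```python
import types

# Stub standing in for the refactored `convert` module, which no
# longer defines `load_vocab` (mirrors the situation after #3633).
convert = types.ModuleType("convert")

def load_vocab_compat(module, vocab_dir):
    """Call module.load_vocab if it still exists; otherwise fail with a
    message that names the removed API instead of a bare AttributeError."""
    loader = getattr(module, "load_vocab", None)
    if loader is None:
        raise RuntimeError(
            "convert.load_vocab was removed upstream; "
            "update this script to use the new vocab API"
        )
    return loader(vocab_dir)

try:
    load_vocab_compat(convert, "models/")
except RuntimeError as exc:
    msg = str(exc)

print(msg)
```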

KerfuffleV2 (Collaborator) commented:

You can probably just remove the script at this point, it was originally just intended as a transitional thing and I doubt many people are converting GGML files these days.

I took a quick look at it and it doesn't seem like there's a simple fix. The function that returned a Vocab seems to just be gone now.

strutive07 (Contributor) commented:

@cebtenzzre I don't have a GGML file to convert, so I can't test this myself. Could you apply the diff below and check whether LLAMA2-13B-Holodeck-1-Q8_0.ggml converts correctly? Alternatively, if you can share a download link for a test GGML file, I'll try it on my end.

diff --git a/convert-llama-ggml-to-gguf.py b/convert-llama-ggml-to-gguf.py
index e359330..de14534 100755
--- a/convert-llama-ggml-to-gguf.py
+++ b/convert-llama-ggml-to-gguf.py
@@ -371,9 +371,10 @@ def handle_metadata(cfg, hp):
         params = convert.Params.loadOriginalParamsJson(fakemodel, orig_config_path)
     else:
         raise ValueError('Unable to load metadata')
-    vocab = convert.load_vocab(
+    vocab = convert.VocabLoader(
+        params,
         cfg.vocab_dir if cfg.vocab_dir is not None else cfg.model_metadata_dir,
-        cfg.vocabtype)
+    )

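For a script that has to run against both old and new revisions of convert.py, the diff's one-line swap could also be written as a dispatch on whichever API is present. A sketch only: the argument order for VocabLoader follows the diff above and may not match upstream exactly, and the stub module below is hypothetical:

```python
import types

def load_vocab_any(convert_module, params, vocab_path):
    """Use the pre-#3633 convert.load_vocab if present, otherwise fall
    back to the newer VocabLoader class (per the diff in this thread)."""
    if hasattr(convert_module, "load_vocab"):
        # Old API: load_vocab(vocab_dir, vocabtype); vocabtype elided here.
        return convert_module.load_vocab(vocab_path, None)
    # New API per the diff: VocabLoader(params, vocab_dir)
    return convert_module.VocabLoader(params, vocab_path)

# Demo with a stub module mimicking the post-refactor API.
stub = types.ModuleType("convert")
stub.VocabLoader = lambda params, path: f"VocabLoader({params!r}, {path!r})"
result = load_vocab_any(stub, "hp", "models/")
print(result)
```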