This repository has been archived by the owner on Jun 24, 2024. It is now read-only.
Open
Description
I tried using the CLI application to see how far it had come since being llama-rs, and noticed that an error pops up when loading one of the newer WizardLM uncensored models quantized with the GGMLv3 method:
llm llama chat --model-path .\Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin
⣾ Loading model...Error:
0: Could not load model
1: invalid file format version 3
Backtrace omitted. Run with RUST_BACKTRACE=1 environment variable to display it.
Run with RUST_BACKTRACE=full to include source snippets.
Am I using it the wrong way or is it not supported yet?
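For anyone debugging this, the version the loader rejects comes from the file header. A minimal sketch of inspecting it, assuming the GGJT container layout (magic then format version, both little-endian u32s) used by ggmlv3 files; the magic constant here is taken from llama.cpp-era loaders and should be verified against the version of `llm` you're running:

```python
import struct

# Assumption: GGJT magic constant as used by llama.cpp-era GGML loaders ('ggjt').
GGJT_MAGIC = 0x67676A74

def read_ggml_header(data: bytes) -> tuple[int, int]:
    """Parse magic and format version from the first 8 bytes of a GGML/GGJT file."""
    magic, version = struct.unpack("<II", data[:8])
    return magic, version

# Synthetic 8-byte header standing in for the start of a ggmlv3 file,
# e.g. the first bytes of Wizard-Vicuna-7B-Uncensored.ggmlv3.q5_1.bin.
header = struct.pack("<II", GGJT_MAGIC, 3)
magic, version = read_ggml_header(header)
print(hex(magic), version)
```

If the version printed for your file is 3 but the `llm` release you installed predates GGMLv3 support, the "invalid file format version 3" error would be expected rather than a usage mistake.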