add HF lora convert script #8

Merged · 6 commits · Jul 8, 2024
Changes from 1 commit
fix outfile
ngxson committed Jul 8, 2024
commit 95b3eb057b0261a48aeadcb1524a1f58d7ef39cc
4 changes: 2 additions & 2 deletions convert_lora_to_gguf.py
@@ -31,7 +31,7 @@ def parse_args() -> argparse.Namespace:
         description="Convert a huggingface PEFT LoRA adapter to a GGML compatible file")
     parser.add_argument(
         "--outfile", type=Path,
-        help="path to write to; default: based on input.",
+        help="path to write to; default: based on input. {ftype} will be replaced by the outtype.",
     )
     parser.add_argument(
         "--outtype", type=str, choices=["f32", "f16", "bf16", "q8_0"], default="f16",
@@ -77,7 +77,7 @@ def parse_args() -> argparse.Namespace:
         fname_out = args.outfile
     else:
         # output in the same directory as the model by default
-        fname_out = dir_lora / 'ggml-lora.gguf'
+        fname_out = dir_lora / 'ggml-lora-{ftype}.gguf'
 
     if os.path.exists(input_model):
         lora_model = torch.load(input_model, map_location="cpu")
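The new default file name contains a literal {ftype} placeholder, and the updated --outfile help text says it "will be replaced by the outtype". The substitution step itself is not part of this diff; the following is a minimal sketch of how that replacement could work, where the expand_ftype helper is a hypothetical name, not a function from the script:

from pathlib import Path

def expand_ftype(path: Path, outtype: str) -> Path:
    # Replace a literal '{ftype}' in the file name with the chosen
    # output type (one of 'f32', 'f16', 'bf16', 'q8_0').
    return path.with_name(path.name.format(ftype=outtype))

# 'ggml-lora-{ftype}.gguf' with --outtype q8_0 -> 'ggml-lora-q8_0.gguf'
print(expand_ftype(Path("ggml-lora-{ftype}.gguf"), "q8_0"))

Per the help string, a user-supplied --outfile may also contain {ftype}, so the same substitution would apply to both the default and the explicit path.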