From e7f97ad8c046c6f8bb3db6a3ca6b54369ec2db8f Mon Sep 17 00:00:00 2001
From: "Wang, Yi"
Date: Thu, 17 Oct 2024 23:01:56 +0800
Subject: [PATCH] Add peft generation example (#1427)

---
 examples/text-generation/README.md | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/examples/text-generation/README.md b/examples/text-generation/README.md
index 22390d649..b99d3d8a6 100755
--- a/examples/text-generation/README.md
+++ b/examples/text-generation/README.md
@@ -214,6 +214,22 @@ python run_generation.py \
 > The prompt length is limited to 16 tokens. Prompts longer than this will be truncated.
 
+### Use PEFT models for generation
+
+You can also provide the path to a PEFT model to perform generation with the argument `--peft_model`.
+
+For example:
+```bash
+python run_generation.py \
+--model_name_or_path meta-llama/Llama-2-7b-hf \
+--use_hpu_graphs \
+--use_kv_cache \
+--batch_size 1 \
+--bf16 \
+--max_new_tokens 100 \
+--prompt "Here is my prompt" \
+--peft_model yard1/llama-2-7b-sql-lora-test
+```
 ### Using growing bucket optimization