
Update finetune_llama3.py
marcopoli authored May 11, 2024
1 parent 1704d0a commit eaf49a6
Showing 1 changed file with 1 addition and 1 deletion.
2 changes: 1 addition & 1 deletion model_adaptation/finetune_llama3.py
@@ -12,7 +12,7 @@

 max_seq_length = 8192 # Choose any! We auto support RoPE Scaling internally!
 dtype = None # None for auto detection. Float16 for Tesla T4, V100, Bfloat16 for Ampere+
-load_in_4bit = False # Use 4bit quantization to reduce memory usage. Can be False.
+load_in_4bit = True # Use 4bit quantization to reduce memory usage. Can be False.
 model_name = "swap-uniba/LLaMAntino-3-ANITA-8B-Inst-DPO-ITA"

 model, tokenizer = FastLanguageModel.from_pretrained(
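The flag flipped in this commit enables 4-bit weight quantization at load time, which is the main lever for fitting an 8B-parameter model on smaller GPUs. A rough back-of-envelope sketch of the expected weight-memory saving (the helper `est_memory_gb` is hypothetical, and the 8B parameter count is inferred from the model name; activation memory, KV cache, and quantization overhead are ignored):

```python
def est_memory_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-memory estimate: parameters * bits per weight, in GB.

    Ignores activations, KV cache, and per-block quantization metadata.
    """
    return n_params * bits_per_weight / 8 / 1e9

params = 8e9  # LLaMAntino-3-ANITA-8B is an 8B-parameter model

fp16_gb = est_memory_gb(params, 16)  # load_in_4bit = False: fp16/bf16 weights
int4_gb = est_memory_gb(params, 4)   # load_in_4bit = True: 4-bit weights

print(f"fp16: ~{fp16_gb:.0f} GB, 4-bit: ~{int4_gb:.0f} GB")  # ~16 GB vs ~4 GB
```

In practice real 4-bit loading lands somewhat above the 4 GB figure because of quantization block metadata and layers kept in higher precision, but the roughly 4x reduction is why `load_in_4bit = True` makes this model trainable on a single consumer GPU.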

