Interesting work, and thank you for open-sourcing it!
Could you please provide more details on fine-tuning LLaMA? Specifically:

- I assume training uses `all_train_prompt.jsonl`, correct?
- Does LLaMA training also require 10 epochs, as with GPT?
- Did you use LoRA or fp16 during fine-tuning?
- How did you select the best checkpoint, or did you simply use the parameters after 10 epochs?
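For context on the checkpoint question, here is roughly what I had in mind, a minimal sketch assuming per-epoch eval losses are logged to a JSONL file (the log format and `eval_loss` field name are my assumptions, not from this repo):

```python
import json

# Hypothetical per-epoch eval log, one JSON record per line;
# the field names here are illustrative, not from this repo.
log_lines = [
    '{"epoch": 1, "eval_loss": 1.92}',
    '{"epoch": 2, "eval_loss": 1.41}',
    '{"epoch": 3, "eval_loss": 1.55}',
]

def best_checkpoint(lines):
    """Return the logged record with the lowest eval_loss."""
    records = [json.loads(line) for line in lines]
    return min(records, key=lambda r: r["eval_loss"])

print(best_checkpoint(log_lines)["epoch"])  # epoch 2 has the lowest loss
```

Is this the kind of selection you used, or did you just take the final epoch's weights?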
Looking forward to your reply. Thank you very much.