Details on fine-tuning a Llama #10

Open

Description

@trestad

Interesting work, and thanks for open-sourcing it!

Could you please provide more details on fine-tuning LLaMA?

To be specific:

- I assume the training needs `all_train_prompt.jsonl`, right?
- Does LLaMA fine-tuning also run for 10 epochs, like GPT?
- Did you use LoRA or fp16 during fine-tuning?
- How did you select the best checkpoint, or did you simply take the parameters after 10 epochs?

I've sketched my current guess at the setup below.
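For concreteness, here is a rough sketch of what I imagine the recipe looks like, using Hugging Face `transformers`, `peft`, and `datasets`. The base checkpoint name, the `prompt` field in the jsonl, and all hyperparameters are my own guesses, not details taken from this repo:

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "huggyllama/llama-7b"  # placeholder; I don't know which checkpoint you used

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizer has no pad token by default

model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# LoRA on the attention projections, a common default for LLaMA
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Guessing 'all_train_prompt.jsonl' is one JSON object per line with a text field;
# I'm calling that field "prompt" here
dataset = load_dataset("json", data_files="all_train_prompt.jsonl", split="train")

def tokenize(example):
    return tokenizer(example["prompt"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="llama-ft",
    num_train_epochs=10,            # same 10 epochs as GPT? part of my question
    per_device_train_batch_size=4,
    learning_rate=2e-4,
    fp16=True,                      # mixed precision; or was it pure fp16, or LoRA only?
    save_strategy="epoch",          # per-epoch checkpoints, so the best one can be picked
    logging_steps=50,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

If you could point out where this differs from your actual recipe, especially how the best checkpoint was chosen, that would answer most of my questions.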

Looking forward to your reply. Thank you very much.
