I've been attempting to reproduce an experiment that fine-tunes the Llama-2-7b-hf model on a random 5% of the training data, using open-instruct's finetune_with_accelerate.sh (my launch command is sketched after the list below). I followed the hyperparameters outlined in your paper:
learning_rate = 2e-5
total_batch_size = 128
warmup_ratio = 0.03
lr_scheduler_type = linear
weight_decay = 0.0
num_train_epochs = 4
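For concreteness, here is roughly how I launch the run. This is a sketch, not your exact command: the flag names are the ones I see in open_instruct/finetune.py in my checkout, the train file path is a placeholder for my 5% sample, and the per-device batch size / gradient-accumulation split is my own choice, picked so it multiplies out to the 128 total batch size.

```sh
# Sketch of my launch command (4 GPUs assumed; flag names taken from my
# checkout of open_instruct/finetune.py -- adjust if yours differs).
# total_batch_size = 2 per-device * 16 grad-accum steps * 4 GPUs = 128
accelerate launch --num_processes 4 open_instruct/finetune.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --train_file data/random_5pct_subset.jsonl \
    --learning_rate 2e-5 \
    --lr_scheduler_type linear \
    --warmup_ratio 0.03 \
    --weight_decay 0.0 \
    --num_train_epochs 4 \
    --per_device_train_batch_size 2 \
    --gradient_accumulation_steps 16 \
    --output_dir output/llama2-7b-random5
```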
Despite following these settings, my model's performance on the MMLU benchmark is significantly worse than yours, as shown in the screenshot. Is this discrepancy anticipated? The gap seems larger than one might reasonably expect.
Could you please confirm whether my hyperparameters are fully aligned with those used in your setup? Any additional details about your SFT hyperparameters would be greatly appreciated.
Thank you for your assistance.
Hi! Apologies for the delayed response. I believe this is the setup we used in our work, but it's a bit tricky to pinpoint the exact issue here. It could be due to a couple of factors:
You might have used a different prompt format for MMLU (see the evaluation sketch below).
It could simply be variance resulting from selecting the data randomly.
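To rule out the first one, you could re-score your checkpoint with the MMLU script in open-instruct, so both numbers share the same prompt format. A minimal sketch, assuming the eval/mmlu/run_eval.py entry point and flag names in that repo; the paths are placeholders, and --ntrain sets the number of few-shot examples, so match it to the setting you compared against:

```sh
# Sketch: re-evaluate with open-instruct's MMLU harness so the prompt
# format matches ours. Flag names assumed from eval/mmlu/run_eval.py;
# data_dir, save_dir, and the model path are placeholders.
python -m eval.mmlu.run_eval \
    --ntrain 0 \
    --data_dir data/eval/mmlu \
    --save_dir results/mmlu/llama2-7b-random5 \
    --model_name_or_path output/llama2-7b-random5 \
    --eval_batch_size 8
```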