
Discrepancy in Model Performance When Reproducing Experiment #25

Open
tangzhy opened this issue Aug 16, 2024 · 1 comment

@tangzhy

tangzhy commented Aug 16, 2024

[screenshot: MMLU score comparison between my reproduction and the reported results]

Hi,

I've been attempting to reproduce an experiment that fine-tunes the Llama-2-7b-hf model on a random 5% of the training data, using open-instruct's finetune_with_accelerate.sh (roughly the invocation sketched after the list below). I adhered to the hyperparameters outlined in your paper:

  • learning_rate = 2e-5
  • total_batch_size = 128
  • warmup_ratio = 0.03
  • lr_scheduler_type = linear
  • weight_decay = 0.0
  • num_train_epochs = 4

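For concreteness, this is roughly how I launched the run. It is only a sketch of my invocation, not the repo's script verbatim: the GPU count, per-device batch size, data path, and output path are placeholders I chose, and the flag names follow the usual HF-style no_trainer conventions, so the exact names in open_instruct/finetune.py / finetune_with_accelerate.sh may differ slightly:

```bash
# Sketch only -- placeholders and flag names are my own; check
# finetune_with_accelerate.sh in the repo for the authoritative version.
NUM_GPUS=8                 # placeholder: one node with 8 GPUs
BATCH_SIZE_PER_GPU=2       # placeholder
TOTAL_BATCH_SIZE=128       # matches the paper's total_batch_size
# effective batch = per-GPU batch * num GPUs * gradient accumulation steps
GRAD_ACC_STEPS=$(( TOTAL_BATCH_SIZE / NUM_GPUS / BATCH_SIZE_PER_GPU ))  # = 8 here

accelerate launch --num_processes "$NUM_GPUS" open_instruct/finetune.py \
    --model_name_or_path meta-llama/Llama-2-7b-hf \
    --train_file data/random_5pct.jsonl \
    --per_device_train_batch_size "$BATCH_SIZE_PER_GPU" \
    --gradient_accumulation_steps "$GRAD_ACC_STEPS" \
    --learning_rate 2e-5 \
    --lr_scheduler_type linear \
    --warmup_ratio 0.03 \
    --weight_decay 0.0 \
    --num_train_epochs 4 \
    --output_dir output/llama2-7b-random-5pct
```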
Despite following these settings, my model's performance on the MMLU benchmark is significantly worse than yours, as shown in the screenshot. Is this discrepancy anticipated? The gap in performance seems larger than what one might reasonably expect.

Could you please confirm whether my hyperparameters are fully aligned with those used in your setup? Any further details about your SFT hyperparameters would also be greatly appreciated.

Thank you for your assistance.

@xiamengzhou
Collaborator

Hi! Apologies for the delayed response. I believe this is the setup we used in our work, but it's a bit tricky to pinpoint the exact issue here. It could be due to a couple of reasons:

  • You might have used a different prompt format for MMLU?
  • It could simply be due to the variance resulting from selecting the 5% of data randomly (see the sketch below).
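If you want to rule out the second point, it may help to make the 5% draw reproducible (fix a seed and, ideally, compare MMLU across a few different seeds). A rough sketch, assuming the training data is a single JSONL file; the paths and seed are placeholders:

```bash
# Deterministic 5% subsample of a JSONL training file (paths/seed are placeholders).
TRAIN_FILE=data/train.jsonl
SEED=42
N_TOTAL=$(wc -l < "$TRAIN_FILE")
N_SUBSET=$(( N_TOTAL * 5 / 100 ))

# Seeded byte stream for shuf (recipe from the GNU coreutils manual):
# the same seed always yields the same shuffle, hence the same subset.
seeded_random() {
  openssl enc -aes-256-ctr -pass pass:"$1" -nosalt </dev/zero 2>/dev/null
}

shuf --random-source=<(seeded_random "$SEED") "$TRAIN_FILE" \
  | head -n "$N_SUBSET" > "data/random_5pct_seed${SEED}.jsonl"
```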
