
🐾 Process-supervised RM Trainer #2127

Merged · 140 commits · Dec 13, 2024

Conversation

@gaetanlop (Contributor) commented Sep 26, 2024

What does this PR do?

Adds support for process-supervised reward model (PRM) training to TRL, as requested in #2110.

List of papers using PRMs: [1], [2], [3], [4]...

Fixes #2110

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

@lewtun @kashif

@gaetanlop gaetanlop marked this pull request as draft September 26, 2024 03:15
@lewtun (Member) commented Sep 26, 2024

This is awesome, @gaetanlop! Would you like some early feedback on the PR, or would you prefer I wait a bit until it's more polished?

@gaetanlop (Contributor, Author) commented

Hey @lewtun, thank you for the message. Currently, the only files that are more or less ready are prm_trainer.py and prm_config.py. The rest are just placeholders that I haven’t had the opportunity to work on yet.

Implementing a PRM seems pretty straightforward: it is essentially a token classification task where only the prediction for the last token of each step is assigned a label, and all other tokens are ignored during loss calculation (see the sketch after the column list below).

If the dataset isn’t pre-tokenized, I assume it should contain the following columns:

  • prompt: Either a string or past messages
  • steps: A list of strings
  • labels: A list of integers corresponding to the label associated with each step
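
To make the labeling scheme concrete, here is a hedged sketch of how such an example could be tokenized. This is illustrative only, not the trainer's actual implementation; the tokenizer checkpoint, separator, and function name are arbitrary choices:

```python
# Hedged sketch of the labeling scheme described above, not TRL's actual code.
# Each step contributes tokens whose labels are -100 (ignored by the loss),
# except the step's final token, which carries the step-level label.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")  # arbitrary choice

def tokenize_prm_example(prompt, steps, step_labels, separator="\n"):
    input_ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    labels = [-100] * len(input_ids)  # no supervision on prompt tokens
    for step, label in zip(steps, step_labels):
        step_ids = tokenizer(separator + step, add_special_tokens=False)["input_ids"]
        input_ids += step_ids
        # Only the last token of the step is assigned the step-level label.
        labels += [-100] * (len(step_ids) - 1) + [label]
    return {"input_ids": input_ids, "labels": labels}

example = tokenize_prm_example(
    prompt="Is 13 prime?",
    steps=["13 has no divisors other than 1 and 13.", "So 13 is prime."],
    step_labels=[1, 1],
)
```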

Are you aware of an HF dataset to train PRMs for the example file? Also, how can I add a new subset to the trl-internal-testing/zen dataset to support stepwise reward models for the unit test of the prm_trainer?

Thanks again for your time!

@gaetanlop gaetanlop marked this pull request as ready for review September 28, 2024 18:34
@gaetanlop (Contributor, Author) commented Sep 28, 2024

PR ready for review. I have changed the naming convention from the prm naming I used before to the stepwise naming suggested in #2110.

Tests: I created a dummy_dataset, but we should add a subset to trl-internal-testing/zen as done in other scripts.
Example: The example currently uses a placeholder for the dataset name since, to the best of my knowledge, TRL hasn't released a stepwise reasoning dataset on the Hub. We should add this too.

@lewtun (Member) left a comment

Thank you for the very clean PR @gaetanlop - this looks great! I've left some minor suggestions regarding the structure. Aside from those, and a smallish dataset in the right format so we can sanity check that accuracy goes up, loss goes down, etc., I think this is quite close to being ready.

Review threads (since outdated/resolved) on:

  • docs/source/_toctree.yml
  • docs/source/stepwise_reward_trainer.mdx
  • docs/source/dataset_formats.mdx
  • examples/scripts/stepwise_reward_modeling.py
  • trl/trainer/stepwise_reward_config.py (two threads)
  • trl/trainer/stepwise_reward_trainer.py (two threads)
@gaetanlop gaetanlop changed the title [DRAFT] Process-supervised RM Trainer Process-supervised RM Trainer Oct 1, 2024
@gaetanlop (Contributor, Author) commented Oct 1, 2024

Thanks for looking at this @lewtun. It seems trl-internal-testing/zen is the dataset you use for testing. I have opened a PR to trl-lib/zen; should I also open a PR against trl-internal-testing/zen to add 19 samples of PRM800K for testing, or are you handling that on your side (it looks like they are both the same dataset)?

@gaetanlop (Contributor, Author) commented

The new curves look way more reasonable! Thanks for finding the bug @qgallouedec

@qgallouedec (Member) left a comment

Great work all!!

@skepsun commented Dec 11, 2024

Thank you for contributing this well-written stepwise RM trainer. To further advance RLHF with PRMs, some RL trainers, such as the PPO trainer, could benefit from PRMs trained with it. Several points may be worth considering (a sketch of point 3 follows the list):

  1. A suitable step_separator must be defined, ensuring it is always tokenized into a fixed token ID.
  2. Implementing a generate function that outputs one step at a time is unnecessary. Instead, an SFT model trained on a CoT dataset (with a defined step_separator in the answers) is essential to prepare PPO with PRM. Thus, CoT SFT datasets should be constructed, albeit in small quantities, as SFT is only performed to ensure the model outputs CoT answers in a specific format.
  3. The get_reward function already supports scoring across all positions, as it is also utilized by the value model. To integrate PRM scores, an additional mask can be used to extract scores at step_separator positions and directly add them to the non_score_reward.
  4. The score column in the wandb table should include lists of PRM scores with varying lengths.
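
As a hedged sketch of point 3, with hypothetical tensor names and shapes rather than TRL's actual PPO internals, the masking could look like this:

```python
# Hedged sketch of point 3; hypothetical names, not TRL's PPO code.
import torch

def add_prm_rewards(response_ids, prm_scores, non_score_reward, sep_token_id):
    """Keep PRM scores only at step-separator positions and fold them
    into the per-token reward (e.g. on top of the KL penalty).

    response_ids:     (batch, seq_len) generated token ids
    prm_scores:       (batch, seq_len) PRM score at every position
    non_score_reward: (batch, seq_len) per-token reward before PRM scores
    """
    sep_mask = response_ids == sep_token_id  # True at step separators
    step_rewards = torch.where(sep_mask, prm_scores, torch.zeros_like(prm_scores))
    return non_score_reward + step_rewards

# Point 1 above: the separator must map to a single, fixed token id, e.g.
#   sep_ids = tokenizer.encode("\n\n", add_special_tokens=False)
#   assert len(sep_ids) == 1, "pick a separator that tokenizes to one id"
```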

@qgallouedec (Member) commented Dec 12, 2024

Hey @gaetanlop. We were thinking that renaming the trainer to PRMTrainer would make more sense. Do you agree?
If so, can you make the edit? I can do it as well if you want me to.

@gaetanlop (Contributor, Author) commented Dec 13, 2024

Hello, sounds good to me; that's how I named it in my initial commits.
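
For reference, a minimal usage sketch of the renamed trainer, assuming the API as it ultimately landed in TRL; the model checkpoint, output directory, and the trl-lib/math_shepherd dataset are illustrative choices:

```python
# Minimal usage sketch, assuming the PRMTrainer API as merged in TRL;
# checkpoint and dataset names are illustrative.
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

model_id = "Qwen/Qwen2-0.5B-Instruct"
model = AutoModelForTokenClassification.from_pretrained(model_id, num_labels=2)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Dataset in the stepwise-supervision format: prompt, completions, labels.
train_dataset = load_dataset("trl-lib/math_shepherd", split="train")

trainer = PRMTrainer(
    model=model,
    args=PRMConfig(output_dir="prm-qwen2-0.5b"),
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```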

Review threads (since outdated/resolved) on:

  • docs/source/prm_trainer.mdx
  • trl/trainer/prm_trainer.py
@qgallouedec qgallouedec merged commit 179ba53 into huggingface:main Dec 13, 2024
13 checks passed