
Refactor Online DPO #1839

Merged 103 commits from online-dpo-llmjudge into main on Aug 28, 2024
Conversation

@vwxyzjn (Contributor) commented on Jul 17, 2024

This PR refactors the OnlineDPOTrainer to have an API that is closer to that of the offline DPOTrainer. It also introduces a LogCompletionsCallback that logs a table of completions to Weights & Biases (WandB).
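
For reference, here is a minimal sketch of what the refactored usage might look like, modeled on the offline DPOTrainer API. The argument names (e.g. reward_model, tokenizer) and the callback wiring are assumptions based on that API, not taken verbatim from this PR:

# Minimal sketch of the refactored usage described above; argument names and
# callback wiring are assumptions modeled on the offline DPOTrainer API.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSequenceClassification,
    AutoTokenizer,
)
from trl import LogCompletionsCallback, OnlineDPOConfig, OnlineDPOTrainer

model = AutoModelForCausalLM.from_pretrained("trl-lib/pythia-1b-deduped-tldr-sft")
reward_model = AutoModelForSequenceClassification.from_pretrained(
    "trl-lib/pythia-1b-deduped-tldr-rm", num_labels=1
)
tokenizer = AutoTokenizer.from_pretrained("trl-lib/pythia-1b-deduped-tldr-sft")
dataset = load_dataset("trl-lib/tldr", split="train")

trainer = OnlineDPOTrainer(
    model=model,
    reward_model=reward_model,
    args=OnlineDPOConfig(output_dir="pythia-1b-deduped-tldr-online-dpo", beta=0.1),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
# The new callback logs a table of sampled completions to WandB during training.
trainer.add_callback(LogCompletionsCallback(trainer))
trainer.train()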

@vwxyzjn requested a review from @qgallouedec on July 17, 2024.
@qgallouedec (Member) commented on Jul 18, 2024

Once #1598 is merged, we'll probably only need to do:

 trainer = OnlineDPOTrainer(..., judge=OpenAIJudge())

EDIT: The PR has gone in another direction; we'll integrate the judges later.
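
For context, a hypothetical sketch of that judge-based flow, reusing the objects from the sketch under the PR description; OpenAIJudge and the keyword names come only from the comment above, not from a released API, since judge support was deferred to a later PR:

# Hypothetical judge-based usage sketched from the comment above; OpenAIJudge
# was not yet merged at this point, so none of this is a released API.
from trl import OnlineDPOConfig, OnlineDPOTrainer
from trl import OpenAIJudge  # hypothetical import: judges deferred to a later PR

trainer = OnlineDPOTrainer(
    model=model,            # policy model, as in the reward-model setup above
    judge=OpenAIJudge(),    # ranks completion pairs in place of a reward model
    args=OnlineDPOConfig(output_dir="pythia-1b-deduped-tldr-online-dpo"),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()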


Base automatically changed from online-trainer-refactor to main on July 18, 2024.
@lewtun (Member) commented on Aug 27, 2024

I've tested that the code works as intended for the TL;DR experiments at the 1B, 2.8B, and 6.9B scales:

# 1B - fits with DDP
accelerate launch --config_file examples/accelerate_configs/multi_gpu.yaml \
    examples/scripts/dpo_online.py \
    --model_name_or_path trl-lib/pythia-1b-deduped-tldr-sft  \
    --reward_model_path trl-lib/pythia-1b-deduped-tldr-rm \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --output_dir pythia-1b-deduped-tldr-online-dpo \
    --beta 0.1 \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 2 \
    --num_train_epochs 3 \
    --max_new_tokens 53 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0 \
    --logging_steps 20 \
    --save_steps 0.1 \
    --push_to_hub

# 2.8B - fits with ZeRO-2
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
    examples/scripts/dpo_online.py \
    --model_name_or_path trl-lib/pythia-2.8b-deduped-tldr-sft  \
    --reward_model_path trl-lib/pythia-2.8b-deduped-tldr-rm \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --output_dir pythia-2.8b-deduped-tldr-online-dpo \
    --beta 0.1 \
    --per_device_train_batch_size 8 \
    --gradient_accumulation_steps 2 \
    --num_train_epochs 3 \
    --max_new_tokens 53 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0 \
    --bf16 \
    --logging_steps 20 \
    --save_steps 0.1 \
    --push_to_hub

# 6.9B - fits with ZeRO-2 and gradient checkpointing
accelerate launch --config_file examples/accelerate_configs/deepspeed_zero2.yaml \
    examples/scripts/dpo_online.py \
    --model_name_or_path trl-lib/pythia-6.9b-deduped-tldr-sft  \
    --reward_model_path trl-lib/pythia-6.9b-deduped-tldr-rm \
    --dataset_name trl-lib/tldr \
    --learning_rate 5.0e-7 \
    --output_dir pythia-6.9b-deduped-tldr-online-dpo \
    --beta 0.1 \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --num_train_epochs 3 \
    --max_new_tokens 53 \
    --warmup_ratio 0.1 \
    --missing_eos_penalty 1.0 \
    --bf16 \
    --gradient_checkpointing \
    --logging_steps 20 \
    --save_steps 0.1 \
    --push_to_hub

What is currently not working is ZeRO-3, which produces a deadlock in the LogCompletionsCallback. I am not sure exactly why this is happening, but we can leave it for a follow-up PR.

lewtun and others added 2 commits on August 27, 2024 (Co-authored-by: Quentin Gallouédec).
@edbeeching (Collaborator) left a comment

One minor comment, otherwise LGTM. Thanks.

@lewtun merged commit e755eee into main on Aug 28, 2024 (10 checks passed) and deleted the online-dpo-llmjudge branch.