[Fix] remove setting lr for T5 text encoder when using prodigy in flux dreambooth lora script #9473

Merged (7 commits) on Oct 28, 2024
fix: removed setting of text encoder lr for T5 as it's not being tuned
biswaroop1547 authored Sep 19, 2024
commit 6c6724fa2af884619a749a9acbec526de75e0aef
examples/dreambooth/train_dreambooth_flux.py: 1 change (0 additions, 1 deletion)
@@ -1288,7 +1288,6 @@ def load_model_hook(models, input_dir):
             # changes the learning rate of text_encoder_parameters_one and text_encoder_parameters_two to be
             # --learning_rate
             params_to_optimize[1]["lr"] = args.learning_rate
-            params_to_optimize[2]["lr"] = args.learning_rate

        optimizer = optimizer_class(
            params_to_optimize,
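For context, a minimal sketch of why the removed line fails. This is not the script's exact code: the tensors and argument values below are placeholders, but the group layout mirrors the Flux DreamBooth LoRA script, where only the transformer and the CLIP text encoder receive LoRA parameter groups and the T5 encoder is never tuned, so `params_to_optimize[2]` does not exist.

```python
import torch
from argparse import Namespace

# Hypothetical values standing in for parsed CLI args; names mirror the script's flags.
args = Namespace(learning_rate=1.0, text_encoder_lr=5e-5, train_text_encoder=True)

# Placeholder tensors standing in for the LoRA parameters of the transformer and
# the CLIP text encoder. The T5 encoder is frozen, so it contributes no group.
transformer_lora_parameters = [torch.nn.Parameter(torch.zeros(4))]
text_lora_parameters_one = [torch.nn.Parameter(torch.zeros(4))]

params_to_optimize = [
    {"params": transformer_lora_parameters, "lr": args.learning_rate},
    {"params": text_lora_parameters_one, "lr": args.text_encoder_lr},
]  # len == 2: index 0 is the transformer, index 1 is the CLIP text encoder

# Prodigy expects lr around 1.0, so the script overrides a user-supplied text encoder lr.
if args.train_text_encoder and args.text_encoder_lr:
    params_to_optimize[1]["lr"] = args.learning_rate
    # The deleted line indexed a third group that never exists in this script:
    # params_to_optimize[2]["lr"] = args.learning_rate  # would raise IndexError
```

With the override for the nonexistent T5 group removed, only the CLIP text encoder group's lr is reset to `args.learning_rate` before the Prodigy optimizer is constructed.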