Resume function in DreamBooth LoRA training is broken since 0.19.0 #4936

Closed
@dreamraster

Description

Describe the bug

The resume functionality in the train_dreambooth_lora.py example appears to be completely broken since 0.19.0; I narrowed it down to #3778.
Reverting to the 0.18.2-style temp_pipeline does seem to fix resuming the text encoder, but not the UNet.
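
For context, the example restores checkpoints through Accelerate's save/load state hooks. The sketch below is a simplified approximation, not the actual example code: the load hook body illustrates the 0.18.2-style temp_pipeline approach mentioned above (UNet part only, for brevity), and identifiers such as args and unet stand for the script's own objects.

```python
# Simplified sketch of the checkpoint hook mechanism (an approximation, see note above).
import torch
from accelerate import Accelerator
from diffusers import DiffusionPipeline

accelerator = Accelerator()

def save_model_hook(models, weights, output_dir):
    # Called by accelerator.save_state() at every --checkpointing_steps interval;
    # it is supposed to write the LoRA layers into the checkpoint directory.
    ...

def load_model_hook(models, input_dir):
    # Called by accelerator.load_state() when --resume_from_checkpoint is used.
    # 0.18.2-style workaround: let a throwaway (assumed Stable Diffusion) pipeline
    # parse the checkpoint, then copy the parsed LoRA attention processors into
    # the training UNet. Simplified: the copied parameters would still need to be
    # handed to the optimizer for training to actually continue from them.
    temp_pipeline = DiffusionPipeline.from_pretrained(
        args.pretrained_model_name_or_path, torch_dtype=torch.float32
    )
    temp_pipeline.load_lora_weights(input_dir)
    unet.set_attn_processor(temp_pipeline.unet.attn_processors)
    del temp_pipeline
    torch.cuda.empty_cache()

accelerator.register_save_state_pre_hook(save_model_hook)
accelerator.register_load_state_pre_hook(load_model_hook)
```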

Reproduction

The repro runs training with two resumes each for 0.18.2 and 0.19.

Add a few images to the input folder and run repro.bat.

For version 0.18.2:
Both with and without train_text_encoder, a new pytorch_lora_weights.bin is created properly, because resuming works just fine.

For version 0.19.3:
Both with and without train_text_encoder, no new pytorch_lora_weights.bin is created on the resume steps, so pytorch_lora_weights.bin always stays the same.

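
A quick way to confirm the symptom is to hash pytorch_lora_weights.bin after the initial run and after each resume run: on 0.19.3 the digest never changes, while on 0.18.2 it does. The output path below is an assumption and should match the --output_dir used by the repro.

```python
# Hash the exported LoRA weights to check whether a resume run actually
# produced a new pytorch_lora_weights.bin (output path is an assumption).
import hashlib
from pathlib import Path

def lora_digest(output_dir: str) -> str:
    weights = Path(output_dir) / "pytorch_lora_weights.bin"
    return hashlib.sha256(weights.read_bytes()).hexdigest()

print(lora_digest("output"))
```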

Please remove the .txt extension from the attached files:

train_dreambooth_lora_0.18.2_repro.py.txt
train_dreambooth_lora_0.19.0_repro.py.txt
repro.bat.txt

Logs

No response

System Info

  • diffusers version: 0.19.0
  • Platform: Windows-10-10.0.22621-SP0
  • Python version: 3.10.4
  • PyTorch version (GPU?): 2.0.1+cu118 (True)
  • Huggingface_hub version: 0.16.4
  • Transformers version: 4.33.0
  • Accelerate version: 0.22.0
  • xFormers version: not installed
  • Using GPU in script?: yes
  • Using distributed or parallel set-up in script?: no

Who can help?

@williamberman, @sayakpaul, @yiyixuxu

Metadata

    Labels

    bug (Something isn't working), stale (Issues that haven't received updates)
