Hi! Thank you for sharing the training code and resources. I'm currently training on the FFHQ dataset with your setup, using the following config:
- Total batch size = 12 (learning rates scaled proportionally, e.g. 4e-6 for the generator and 8e-7 for regularization)
- ~100k training steps

However, grid-like periodic artifacts remain in the outputs and show no significant improvement over the course of training. Could you clarify how to balance the loss weights or learning rates to better match the paper's results?
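For reference, here is a minimal sketch of the proportional (linear) learning-rate scaling I used. The reference batch size of 32 and base learning rate of 1e-5 are hypothetical placeholders, not values from the repo:

```python
def scale_lr(base_lr: float, base_batch: int, target_batch: int) -> float:
    """Linear scaling rule: learning rate proportional to total batch size."""
    return base_lr * target_batch / base_batch

# Hypothetical example: a base generator lr of 1e-5 at batch 32,
# scaled down for my total batch size of 12.
print(scale_lr(1e-5, 32, 12))  # 3.75e-06
```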