Description
Is there currently any clipping/clamping applied to the latents after Noise Offset is used?
In my experience, I was training a LoRA for a character that wears a white uniform.
Everything works fine except... sometimes the uniform comes out as black when using the LoRA in generations.
Previously, I was also training a LoRA for a character that wears a blue dress.
Again, everything works fine except the dress often comes out red instead.
I strongly suspect that, when the noise offset is applied, the resulting latents end up with values outside the range the model can handle, causing some sort of overflow and turning white into black, as I experienced.
Therefore, I experimented by manually adding a torch.clamp call right before the return of the apply_noise_offset function (sketched below).
And as a result, the white uniform no longer becomes black during generation!
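To make the change concrete, here is a minimal sketch of what I tried, assuming a typical noise-offset helper. The actual apply_noise_offset in this repo may have a different signature and body, and the ±3.0 clamp bound is purely illustrative, not a value I have tuned or verified.

```python
import torch

def apply_noise_offset(latents, noise, noise_offset):
    """Sketch only -- the real function in this repo may differ.

    Adds a per-sample, per-channel constant shift to the noise (the usual
    noise-offset trick), then clamps before returning.
    """
    # Standard noise offset: shift each sample/channel by a random constant.
    noise = noise + noise_offset * torch.randn(
        (latents.shape[0], latents.shape[1], 1, 1), device=latents.device
    )

    # Experimental fix: clamp so the returned values stay in a bounded range.
    # The +/-3.0 bound here is a placeholder for illustration, not a
    # recommended value.
    return torch.clamp(noise, -3.0, 3.0)
```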
Is it just a coincidence? Or can someone verify this interaction? And perhaps implement a fix?