Better training loop implementation#8820
Conversation
|
@KohakuBlueleaf Train-LoRA supports training for FLUX and SDXL. Are there any plans to support LoRA training for HiDream or other models as well? |
In theory any supported model in ComfyUI should be trainable |
I tried training with HiDream, but it failed. So I assumed it wasn’t supported and stopped there. :( |
I will check the details. The LoRA training node is basically in alpha, not even beta. There is no clear plan right now for model coverage or specific features. I need to make it work smoothly (on at least one model) first |
This pull request proposes a new implementation of the training loop in the LoRA training node.
Basically, we follow the same idea that samplers are built on: we move the training loop out of the training node and into a train sampler, so the resource management around the sampler executes only once per run instead of once per step. This saves a lot of time otherwise spent repeatedly setting up the sampler/guider.
This implementation can significantly improve training speed, by about 30~50% depending on hardware and arguments.
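The core change described above is hoisting expensive setup out of the per-step loop. A minimal sketch of that pattern (the class and function names here are illustrative stand-ins, not the actual ComfyUI API):

```python
class TrainSampler:
    """Hypothetical sampler that owns the training loop.

    Setup (standing in for building the sampler/guider and managing
    resources) happens once in __init__, not once per training step.
    """

    def __init__(self):
        self.setup_calls = 0
        self._setup()  # expensive one-time setup

    def _setup(self):
        self.setup_calls += 1

    def sample(self, steps):
        # The training loop lives inside the sampler, so nothing is
        # rebuilt between steps.
        losses = []
        for step in range(steps):
            losses.append(1.0 / (step + 1))  # dummy "loss"
        return losses


def old_style_training_node(steps):
    """Old approach: the node drives the loop and pays the sampler
    setup cost on every step."""
    total_setup_calls = 0
    for _ in range(steps):
        sampler = TrainSampler()  # re-setup each step
        total_setup_calls += sampler.setup_calls
        sampler.sample(1)
    return total_setup_calls


# New approach: one setup for the whole run.
sampler = TrainSampler()
sampler.sample(10)
print(sampler.setup_calls)           # 1 setup for 10 steps
print(old_style_training_node(10))   # 10 setups for 10 steps
```

With real setup costs (model patching, guider construction) instead of a counter, eliminating the per-step rebuild is where the reported 30~50% speedup would come from.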