Support inference with LyCORIS GLora networks #13610
Conversation
Does it work with SDXL?
I haven't implemented GLoRA for convolution layers yet, since there is more than one way to implement W' = A + WB when A and B are both low-rank matrices (in a Conv layer, WB has more than one possible form). I think I will implement B directly as a linear layer between channels when I have time.
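For a linear layer the update quoted above is unambiguous. The following is a minimal sketch of one reading of it, where the delta A + WB is added to the frozen weight W and each of A and B is the product of two low-rank factors; all names (a_up, a_down, b_up, b_down) are hypothetical, not identifiers from the actual implementation.

```python
# Hedged sketch: GLoRA-style update for a linear layer, read here as
#   W' = W + (A + W @ B)
# with A and B each a product of two low-rank matrices.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2

W = rng.normal(size=(d_out, d_in))  # frozen base weight

# A = a_up @ a_down must match W's shape (d_out, d_in);
# B = b_up @ b_down must be (d_in, d_in) so that W @ B is well-defined.
a_up, a_down = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))
b_up, b_down = rng.normal(size=(d_in, rank)), rng.normal(size=(rank, d_in))

A = a_up @ a_down
B = b_up @ b_down

W_prime = W + A + W @ B  # effective weight used at inference
```

For a Conv layer the ambiguity mentioned above is real: B could act on input channels, on the flattened kernel, or elsewhere, which is why the conv case is deferred here.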
@oliverban Yes, I've only tested it with SDXL. @KohakuBlueleaf Thank you! Even without the conv layer, GLoRA is making way better images than any normal LoRA I've trained.
@v0xie That's great!
Thanks for your work, @v0xie! GLoRA looks promising, but I can't make inference work with SD1.5. I only get seemingly random noise like the image below. Training with bmaltais' kohya_ss goes well, the samples generated during training look good, your new module is correctly called, and yet… Do you have any idea what could cause the issue?
It's super strange that it's doing that. I hadn't tested SD1.5 with GLoRA before, so I just trained one with the latest dev branch of sd-scripts, and it appears to work correctly. Can you try this LoRA and see if you're able to generate images with it? https://huggingface.co/v0xie/sd15-glora-monster_toy/blob/main/monster_toy_sd15_glora-000010.safetensors Both the picture and the safetensors file have metadata embedded with the training/inference settings.
Yeah, your LoRA works great. Thanks to the embedded metadata, I've been able to track down the problem, and it seems to be … It's strange, though, because as I said, the samples generated during training looked good, so I don't think there is a problem with the training; and yet, I don't see how …
For me, GLoRA looks totally fine in training previews, as previously mentioned, but at inference anything above 0.3 weight produces fried images or pure noise. I have tested multiple configurations of learning rate, dim, and alpha, but the problem doesn't go away.
The delta should be scaled at inference too, but this implementation is missing that scaling, so it will only work correctly when alpha == network_dim. Everything else will be broken.
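The point above can be illustrated with a short sketch. In LoRA-family networks the low-rank delta is conventionally multiplied by scale = alpha / dim before being added to the frozen weight; when alpha == dim the scale is 1, which is exactly the case where an implementation that skips the scaling still happens to produce correct results. The function name and shapes below are illustrative, not taken from the PR.

```python
# Hedged sketch of the standard alpha/dim scaling that the comment says
# this implementation is missing at inference time.
import numpy as np

def apply_delta(W, delta, alpha, dim):
    """Add a low-rank delta to a frozen weight with the usual LoRA scaling."""
    scale = alpha / dim
    return W + scale * delta

W = np.eye(4)
delta = np.ones((4, 4))

# alpha == dim: scale is 1, so scaled and unscaled results agree.
same = apply_delta(W, delta, alpha=4, dim=4)

# alpha != dim (here alpha=1, dim=4): the delta must be shrunk by 1/4;
# skipping the scaling over-applies the delta, "frying" the output.
scaled = apply_delta(W, delta, alpha=1, dim=4)
```

This matches the observation that trained networks only behave at low weights: without the scale factor the delta is effectively applied at dim/alpha times its intended strength.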
Description
This PR adds support for inference with networks trained with LyCORIS GLoRA. The implementation is based on the excellent one by @KohakuBlueleaf here: https://github.com/KohakuBlueleaf/LyCORIS/blob/main/lycoris/modules/glora.py
Changes:
- Adds network_glora.py in extensions-builtin/Lora.

Other notes:
--network_module=lycoris.kohya --network_args "conv_dim=4" "conv_alpha=4" "algo=glora"
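For context, the --network_args above are the kind passed to sd-scripts' train_network.py when training such a network; the sketch below shows where they fit in a full invocation. The model path, output directory, and the remaining flags are placeholders, not settings taken from this PR.

```shell
# Illustrative sd-scripts training invocation; paths and most flags are
# placeholders, only the network_module/network_args lines come from above.
accelerate launch train_network.py \
  --pretrained_model_name_or_path=/path/to/base_model.safetensors \
  --network_module=lycoris.kohya \
  --network_args "conv_dim=4" "conv_alpha=4" "algo=glora" \
  --network_dim=4 --network_alpha=4 \
  --output_dir=/path/to/output
```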
Checklist: