Lora 🟩 Tips, discussion and feedback #136

Open
4lt3r3go opened this issue Dec 13, 2024 · 4 comments

Comments

4lt3r3go commented Dec 13, 2024

Since LoRA is now a thing, I'm opening a single post here for everyone to start talking about it.

Maybe someone is willing to provide some links or a guide on how to do training,
preferably on Windows, so we can give it a try.
The only "tool" I've found for the purpose is https://github.com/tdrussell/diffusion-pipe from @tdrussell,
but honestly I have no idea how to run it; it doesn't really look like something a monkey-brained like me could figure out 😁

@sdqq1234

LoRA node is not working: #135
Is this how it is used?

kijai (Owner) commented Dec 13, 2024

There seem to be some cases where it doesn't load, memory related I think... the latest update, while slowing the loading down, should be safer.

@pundabadu-1

> LoRA node is not working: #135. Is this how it is used?

When I include "red braided hair" in the prompt, Makima appears in the generated video, even though the LoRA creator didn’t mention it.

rayryeng commented Dec 14, 2024

Crossposting from the diffusion-pipe repo: tdrussell/diffusion-pipe#6

I would like to reiterate that it would be very useful to have even a simple TL;DR on how to train a LoRA that plugs into the HunyuanVideoWrapper.

However, from what I understand, it's a matter of using the diffusion-pipe repo to train one: change some paths in the TOML files, point them at a dataset directory containing videos with associated captions, then run the training. You'd then use the LoRA node to link the trained model into the workflow (see the sketch below). I'm sure there are other nuances that would be beneficial to include in such a summary; in particular, I would love to know how we can prevent overfitting.
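To make that concrete, here is a rough sketch of the two config files involved, loosely modeled on the example TOML files shipped with diffusion-pipe. Every key name, value, and path below is an assumption paraphrased from memory of those examples and may differ between versions, so treat the repo's examples/ directory as the source of truth.

```toml
# dataset.toml -- describes the training data (a hypothetical sketch).
# diffusion-pipe expects each video/image to sit next to a caption in a
# .txt file with the same basename.
resolutions = [512]        # illustrative values, not recommendations
frame_buckets = [1, 33]    # e.g. single frames and short clips

[[directory]]
path = '/path/to/your/dataset'   # placeholder: folder of media + .txt captions
num_repeats = 5
```

The main config then points at that dataset file plus the model weights:

```toml
# config.toml -- main training config (a hypothetical sketch).
output_dir = '/path/to/training_runs/my_lora'   # checkpoints land here
dataset = 'dataset.toml'

epochs = 100
micro_batch_size_per_gpu = 1
gradient_accumulation_steps = 4
save_every_n_epochs = 5

[model]
type = 'hunyuan-video'
# Fill in local paths to the transformer/VAE (plus the text encoders,
# per the repo's own example config).
transformer_path = '/path/to/hunyuan_video_transformer.safetensors'
vae_path = '/path/to/hunyuan_video_vae.safetensors'
dtype = 'bfloat16'

[adapter]
type = 'lora'
rank = 32        # lower rank = fewer trainable params, less overfitting risk
dtype = 'bfloat16'

[optimizer]
type = 'adamw_optimi'
lr = 2e-5
weight_decay = 0.01
```

Training is then launched via deepspeed as shown in the diffusion-pipe README, with the main TOML passed as the config; the LoRA .safetensors files it saves under output_dir are what you'd select with the wrapper's LoRA node. As for overfitting, the usual (unverified here) levers would be a lower rank, fewer epochs, a lower learning rate, and comparing the intermediate checkpoints.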
