Support Multi/InfiniteTalk #10179
Conversation
Y'all, this project's bout to make ComfyUI famous, they been too quiet on voice tech, sheesh :)
Kosinkadink left a comment
Hey, thanks for the PR and sorry for the wait! I reviewed this PR with comfy just now. I left individual comments, but here is the summary:
- Make block_idx go into transformer_options in the for loops of the forward function instead. This is something we will want for attention tricks in the near future, so it will keep block_idx from being implemented in multiple ways (see the sketch after this list).
- Even though it removes a lot of the usability right now, remove the looping components from this PR and instead make it work with just the first 'window', without the looping. I will consult with you and Jacob on implementing the standard for looping stuff in ComfyUI as a separate PR, which we can then use to bring back this functionality for this and the self-forcing models.
- q and k should not be returned, as it is unknown whether that would cause higher memory peaks for all the Wan stuff. Instead, the optimized_attention call should be overridable so that the extra attention call can be done here, with the q and k you need stored in transformer_options for the cross_attn patch to use (see the second sketch below).
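To make the first point concrete, here is a minimal sketch of recording block_idx in transformer_options inside the forward loop; the model and block classes are invented for illustration and are not code from this PR:

```python
# Minimal sketch, assuming a ComfyUI-style diffusion model whose forward loops
# over transformer blocks and passes a transformer_options dict down to them.
# Block/TinyModel are made-up names for illustration only.
import torch
import torch.nn as nn


class Block(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, transformer_options=None):
        # A patch could look up transformer_options["block_idx"] here instead of
        # receiving the index through some separate mechanism.
        return self.proj(x)


class TinyModel(nn.Module):
    def __init__(self, dim=64, n_blocks=4):
        super().__init__()
        self.blocks = nn.ModuleList(Block(dim) for _ in range(n_blocks))

    def forward(self, x, transformer_options=None):
        transformer_options = dict(transformer_options or {})
        for i, block in enumerate(self.blocks):
            # Record the index of the block that is about to run.
            transformer_options["block_idx"] = i
            x = block(x, transformer_options=transformer_options)
        return x


if __name__ == "__main__":
    model = TinyModel()
    print(model(torch.randn(2, 16, 64)).shape)  # torch.Size([2, 16, 64])
```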
More details are included in the comments. Let me know if you have any questions!
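Here is a similarly hedged sketch of the third point: instead of returning q and k from self_attn, the attention call is made overridable and the override stashes q and k in transformer_options for a later cross_attn patch. The names optimized_attention_override and multitalk_qk are placeholders, not the PR's actual interface:

```python
# Hedged sketch of an overridable attention call; plain SDPA stands in for
# ComfyUI's optimized_attention, and the override/key names are hypothetical.
import torch
import torch.nn.functional as F


def default_attention(q, k, v, transformer_options=None):
    return F.scaled_dot_product_attention(q, k, v)


def self_attn_forward(q, k, v, transformer_options):
    # If a patch registered an override, let it perform (and augment) the call.
    attn_fn = transformer_options.get("optimized_attention_override", default_attention)
    return attn_fn(q, k, v, transformer_options=transformer_options)


def multitalk_attention_override(q, k, v, transformer_options=None):
    # Stash q and k so a later cross_attn patch can reuse them, then run the
    # normal attention so the block's own output is unchanged.
    transformer_options["multitalk_qk"] = (q, k)
    return default_attention(q, k, v)


if __name__ == "__main__":
    q = k = v = torch.randn(1, 8, 77, 64)  # (batch, heads, tokens, head_dim)
    opts = {"optimized_attention_override": multitalk_attention_override}
    out = self_attn_forward(q, k, v, opts)
    print(out.shape, "multitalk_qk" in opts)  # torch.Size([1, 8, 77, 64]) True
```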
I hope this PR can be completed as soon as possible. I want to experience it in the native workflow. Thank you.
…turn q and k from self_attn
Test result with the latest commit, which includes a workflow: native_MultiTalk_testing_00012-audio.mp4
The inputs used can be found here: https://github.com/MeiGen-AI/InfiniteTalk/tree/main/examples/multi
+label:notify:jk |
This PR is taking too long.
Yes, I agree. The result looks good @comfyanonymous
This is the first time kj has tried to apply his wrapper to a native workflow. It means a lot; I hope this merge will contribute a lot to the community. He has supported a lot of Wan projects, but almost none of them can be used in the native workflow.
Please review this PR quickly, it has taken too long.
@kijai DynamicCombo will be officially supported once #11345 gets merged after the next stable release. Even before it gets merged, DynamicCombo is still 'secretly' available and works with a sufficiently new frontend if you import from
This is a somewhat work-in-progress PR that adds initial, basic Multi/InfiniteTalk support:
- model_patch objects

I've chosen not to do the MultiTalk long generation method, as I find it redundant due to InfiniteTalk. The MultiTalk model itself will load, but further functionality needs to be added to bring the audio control to other models such as VACE.
As I'm not fully versed in how everything works yet, especially memory management wise, I'll be relying on feedback from @comfyanonymous and @Kosinkadink to finish this PR.
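As a rough illustration of the direction, here is a hypothetical sketch of an audio-driven cross-attention patch that reuses a q stashed by a self-attention override and attends over audio embeddings carried in transformer_options; the key names, shapes, and residual wiring are assumptions for illustration, not this PR's implementation:

```python
# Hedged sketch of an audio cross-attention patch. The keys "multitalk_qk" and
# "audio_embeds" are hypothetical; audio tokens act as keys/values here.
import torch
import torch.nn.functional as F


def audio_cross_attn_patch(x, transformer_options):
    qk = transformer_options.get("multitalk_qk")
    audio = transformer_options.get("audio_embeds")
    if qk is None or audio is None:
        # Nothing to do if no q was stashed or no audio conditioning is present.
        return x
    q, _ = qk
    # Attend from the video tokens (q) to the audio tokens.
    audio_out = F.scaled_dot_product_attention(q, audio, audio)
    # Fold the heads back into the channel dimension and add as a residual.
    b, h, n, d = audio_out.shape
    return x + audio_out.transpose(1, 2).reshape(b, n, h * d)


if __name__ == "__main__":
    b, h, n, d = 1, 8, 77, 64
    opts = {
        "multitalk_qk": (torch.randn(b, h, n, d), None),
        "audio_embeds": torch.randn(b, h, 32, d),
    }
    x = torch.randn(b, n, h * d)
    print(audio_cross_attn_patch(x, opts).shape)  # torch.Size([1, 77, 512])
```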