Add Start and End Frame control, works great! #167
base: main
Conversation
nice work!
NICE
Amazing work! I have implemented it into my app and it is working great. My app tutorial: https://youtu.be/HwMngohRmHg?si=ImYvFey-R030fbiM
250420_194342_833_7054_seed269740392_37.mp4
Interpolating between the start and end conditionings depending on the frame number could be cool as well (actually, such interpolation is a unique feature of next-frame prediction models). It will require some caching, though, but that's for the future maybe :) Anyway, amazing work, works like a charm! 🚀
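A minimal sketch of what that per-frame interpolation could look like, assuming the start and end images have already been encoded into conditioning tensors (`start_emb`, `end_emb`, and the helper itself are hypothetical names, not part of this PR):

```python
import torch

# Hypothetical helper: blend two image-conditioning embeddings by progress
# through the video, so early frames lean on the start image and late frames
# on the end image.
def interpolate_conditioning(start_emb: torch.Tensor,
                             end_emb: torch.Tensor,
                             frame_idx: int,
                             total_frames: int) -> torch.Tensor:
    t = frame_idx / max(total_frames - 1, 1)   # 0.0 at the first frame, 1.0 at the last
    return torch.lerp(start_emb, end_emb, t)   # (1 - t) * start_emb + t * end_emb
```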
I tried doing the same, and if the images are different enough the effect is bad. All segments but the first would animate the last frame, and the first segment would quickly turn from the first frame to the last. Maybe I did something wrong; can you try with a longer video, like 10-15 seconds?
It could have been great, but for now this only works for the first second (all of the transformation happens in the first second and all the other seconds are just static).
I think maybe I should create a repo like |
@lllyasviel - All Issues and PRs here seem related to the software/studio - Maybe create a new repo for the Research? |
That was my experience as well, I described my findings here: #32 (comment) |
I have seen you in many issues, and you are always trying to copy others' open-source ideas to make money. Don't you feel ashamed?
For long videos, you will need more middle frames or a change to the schedule method. Still under study...
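One way such a middle-frame schedule could be sketched, assuming a list of user-supplied keyframe images and the number of latent sections (all names here are illustrative, not part of this PR):

```python
# Illustrative sketch: assign one guiding keyframe to each generation section,
# so a long video is anchored by more than just the first and last images.
def keyframe_schedule(keyframes, total_latent_sections):
    """Map each latent-section index to the keyframe that should guide it."""
    schedule = {}
    for section in range(total_latent_sections):
        # Pick the keyframe whose relative position is closest to this section.
        idx = round(section * (len(keyframes) - 1) / max(total_latent_sections - 1, 1))
        schedule[section] = keyframes[idx]
    return schedule
```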
Nice work!! If we could also use ControlNet, this would make a difference in guiding from an existing video.
@@ -184,7 +200,8 @@ def worker(input_image, prompt, n_prompt, seed, total_second_length, latent_wind
    history_pixels = None
    total_generated_latent_frames = 0

    latent_paddings = reversed(range(total_latent_sections))
    # 将迭代器转换为列表 (convert the iterator to a list)
An English comment would be great.
中文评论才是精华 (Chinese comments are the real essence.)
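For readers skimming the diff, here is a minimal sketch of the pattern that comment refers to; the PR's added lines are truncated in the hunk above, so the exact wiring is an assumption:

```python
# Sketch only, not the PR's literal diff: materializing the reversed range as a
# list lets it be indexed and reused, e.g. to know when the final
# (earliest-in-time) section is being generated.
total_latent_sections = 4  # example value
latent_paddings = list(reversed(range(total_latent_sections)))  # [3, 2, 1, 0]
for latent_padding in latent_paddings:
    is_last_section = (latent_padding == 0)
```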
It would be cool to set keyframes for any frame, not just start/end.
效果非常好! (Works really well!)
If we have start and end frames, then we can just split video generation into sections... IMHO the next step will be batch generation and joining.
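A rough sketch of that split-and-join idea, assuming a hypothetical `generate_clip(start_image, end_image, total_second_length)` wrapper around the existing pipeline (none of these names are in the PR):

```python
# Hypothetical sketch: generate each segment between consecutive keyframes,
# then join the segments into one long frame sequence.
def generate_long_video(keyframes, seconds_per_segment, generate_clip):
    segments = []
    for start_img, end_img in zip(keyframes[:-1], keyframes[1:]):
        clip = generate_clip(start_image=start_img,
                             end_image=end_img,
                             total_second_length=seconds_per_segment)
        # Drop the first frame of every segment after the first so the shared
        # keyframe is not duplicated at the joins.
        segments.append(clip if not segments else clip[1:])
    return [frame for clip in segments for frame in clip]
```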
You are literally saying you implemented someone else's work and are essentially selling it. Have you no shame? |
Great job, @TTPlanetPig and hhy! It worked great for me. One issue I had was that the size of the output videos was smaller than that of the default i2v output files with the same settings. I turned the MPEG compression down to zero, which made me feel a bit better. Thank you, @lllyasviel, for FramePack!
By simply replacing the last predicted image with the inserted end image, we can now easily control the video toward what we want. The pincer attack from Tenet is now complete.
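A rough sketch of how that could be wired, assuming the end image is preprocessed and VAE-encoded the same way the start image is in demo_gradio.py; the exact placement in the history buffer is an assumption, not the PR's literal diff:

```python
import torch

# Assumed wiring, not the PR's exact code: encode the user-supplied end image
# like the start image, then write its latent into the history buffer so the
# sampler treats it as the "last predicted image" instead of predicting it freely.
# `end_image_np`, `vae`, `vae_encode`, and `history_latents` are assumed to be
# the same kinds of objects used for the start frame.
end_image_pt = torch.from_numpy(end_image_np).float() / 127.5 - 1.0  # HWC uint8 -> [-1, 1]
end_image_pt = end_image_pt.permute(2, 0, 1)[None, :, None]          # -> B, C, T, H, W
end_latent = vae_encode(end_image_pt, vae)

# FramePack generates backwards in time, so anchoring the latest point of the
# timeline means the very first sampled section already targets the end image.
history_latents[:, :, :1, :, :] = end_latent
```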
Thanks to my friend "hhy", who helped me with the coding; I only had the idea, but he is much better at coding!
4.20.-1.mp4
250420_204753_497_5473_10.mp4