FramePack

Official implementation and desktop software for "Packing Input Frame Context in Next-Frame Prediction Models for Video Generation".

Links: Paper, Project Page

FramePack is a next-frame (next-frame-section) prediction neural network structure that generates videos progressively.

FramePack compresses input contexts to a constant length so that the generation workload is invariant to video length.

FramePack can process a very large number of frames with 13B models even on laptop GPUs.

FramePack can be trained with a much larger batch size, similar to the batch size for image diffusion training.

Video diffusion, but feels like image diffusion.
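
To make the constant-length idea concrete, below is a toy, purely illustrative Python sketch (not the actual FramePack algorithm or API; pack and predict_next_section are made-up stand-ins). Previously generated frames are re-packed into a fixed-length context before each step, so predicting the next section costs roughly the same no matter how long the video already is.

import numpy as np

def pack(frames, context_len=16):
    # Toy stand-in for FramePack's context packing: squeeze any number of
    # frames into at most context_len slots by keeping recent frames densely
    # and subsampling older ones. The real compression scheme is different.
    if len(frames) <= context_len:
        return list(frames)
    recent = frames[-context_len // 2:]
    older = frames[:-context_len // 2]
    slots = context_len - len(recent)
    step = max(1, len(older) // slots)
    return older[::step][:slots] + recent

def predict_next_section(context, section_len=9):
    # Toy stand-in for the sampler: returns dummy frames of a fixed size.
    return [np.zeros((64, 64, 3)) for _ in range(section_len)]

frames = [np.zeros((64, 64, 3))]      # the input image
while len(frames) < 90:               # ~3 seconds at 30 fps
    context = pack(frames)            # never longer than 16 entries
    frames += predict_next_section(context)
print(f"generated {len(frames)} frames; packed context has {len(pack(frames))} entries")

The real model works on latents and compresses frames according to their importance; the sketch only illustrates why the per-section workload stays bounded as the video grows.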

Notes

Note that this GitHub repository is the only official FramePack website. We do not have any web services. All other websites are spam and fake, including but not limited to framepack.co, frame_pack.co, framepack.net, frame_pack.net, framepack.ai, frame_pack.ai, framepack.pro, frame_pack.pro, framepack.cc, frame_pack.cc, framepackai.co, frame_pack_ai.co, framepackai.net, frame_pack_ai.net, framepackai.pro, frame_pack_ai.pro, framepackai.cc, frame_pack_ai.cc, and so on. Again, they are all spam and fake. Do not pay money or download files from any of those websites.

The team is on leave between April 21 and 29. PR merging will be delayed.

Requirements

Note that this repo is functional desktop software with a minimal standalone high-quality sampling system and memory management.

Start with this repo before you try anything else!

Requirements:

  • Nvidia GPU in the RTX 30XX, 40XX, or 50XX series that supports fp16 and bf16. GTX 10XX/20XX cards are not tested.
  • Linux or Windows operating system.
  • At least 6GB GPU memory.

To generate a 1-minute video (60 seconds) at 30 fps (1800 frames) using the 13B model, the minimum required GPU memory is 6GB. (Yes, 6GB, not a typo. Laptop GPUs are okay.)
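
If you are not sure how much GPU memory your device has, you can check with a short PyTorch snippet:

import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB VRAM")
else:
    print("No CUDA device detected.")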

Regarding speed: on my RTX 4090 desktop it generates at about 2.5 seconds/frame (unoptimized) or 1.5 seconds/frame (with teacache). On my laptops, such as a 3070 Ti laptop or 3060 laptop, it is about 4x to 8x slower. Troubleshoot if your speed is much slower than this.

In any case, you will directly see the generated frames since it is next-frame(-section) prediction. So you will get lots of visual feedback before the entire video is generated.

Installation

Windows:

>>> Click Here to Download One-Click Package (CUDA 12.6 + Pytorch 2.6) <<<

After you download, uncompress the package, use update.bat to update, and use run.bat to run.

Note that running update.bat is important; otherwise you may be using an older version with unfixed bugs.


Note that the models will be downloaded automatically. You will download more than 30GB from HuggingFace.

Linux:

We recommend using an independent Python 3.10 environment.

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu126
pip install -r requirements.txt

To start the GUI, run:

python demo_gradio.py

Note that it supports --share, --port, --server, and so on.
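
For example, to bind to a specific address and port and create a public share link (the values here are placeholders; run python demo_gradio.py --help to see the full list of options):

python demo_gradio.py --server 127.0.0.1 --port 7860 --share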

The software supports PyTorch attention, xformers, flash-attn, and sage-attention. By default, it uses PyTorch attention. You can install the other attention kernels if you know how.

For example, to install sage-attention (Linux):

pip install sageattention==1.0.6

However, we highly recommend first trying it without sage-attention, since sage-attention influences the results, though the influence is minimal.
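
If you are not sure which of these optional packages are already installed in your environment, a quick check like the following works (the module names xformers, flash_attn, and sageattention are assumed to match the pip packages above):

import importlib.util

for name in ("xformers", "flash_attn", "sageattention"):
    found = importlib.util.find_spec(name) is not None
    print(f"{name}: {'installed' if found else 'not installed'}")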

GUI

[screenshot of the GUI]

On the left you upload an image and write a prompt.

On the right are the generated videos and latent previews.

Because this is a next-frame-section prediction model, the video is generated progressively and gets longer and longer.

You will see the progress bar for each section and the latent preview for the next section.

Note that the initial progress may be slower than later diffusion steps, as the device may need some warmup.

Sanity Check

Before trying your own inputs, we highly recommend going through the sanity check to find out whether anything is wrong with your hardware or software.

Next-frame-section prediction models are very sensitive to subtle differences in noise and hardware. Usually, people will get slightly different results on different devices, but the results should look similar overall. In some cases, you may even get exactly the same results.

Image-to-5-seconds

Download this image:

Copy this prompt:

The man dances energetically, leaping mid-air with fluid arm swings and quick footwork.

Set like this:

(all default parameters, with teacache turned off)

The result will be:

0.mp4
Video may be compressed by GitHub

Important Note:

Again, this is a next-frame-section prediction model. This means you will generate videos frame-by-frame or section-by-section.

If you get a much shorter video in the UI, such as one only 1 second long, that is totally expected. You just need to wait; more sections will be generated to complete the video.

Know the influence of TeaCache and Quantization

Download this image:

Copy this prompt:

The girl dances gracefully, with clear movements, full of charm.

Set like this:

[screenshot of settings]

Turn off teacache:

[screenshot: teacache turned off]

You will get this:

2.mp4
Video may be compressed by GitHub

Now turn on teacache:

[screenshot: teacache turned on]

About 30% of users will get this (the other 70% will get different random results depending on their hardware):

2teacache.mp4
A typical degraded result.

As you can see, teacache is not truly lossless and can sometimes influence the result quite a lot.

We recommend using teacache to try ideas and then using the full diffusion process to get high-quality results.

This recommendation also applies to sage-attention, bnb quantization, gguf, and so on.

Image-to-1-minute

The girl dances gracefully, with clear movements, full of charm.

[input image]

Set video length to 60 seconds:

[screenshot: video length set to 60 seconds]

If everything is in order, you will eventually get a result like this.

60s version:

3.mp4
Video may be compressed by GitHub

6s version:

7.mp4
Video may be compressed by GitHub

More Examples

Many more examples are on the Project Page.

Below are some more examples that you may be interested in reproducing.


The girl dances gracefully, with clear movements, full of charm.

[input image]

4.mp4
Video may be compressed by GitHub

The girl suddenly took out a sign that said “cute” using right hand

[input image]

5.mp4
Video may be compressed by GitHub

The girl skateboarding, repeating the endless spinning and dancing and jumping on a skateboard, with clear movements, full of charm.

[input image]

6.mp4
Video may be compressed by GitHub

The girl dances gracefully, with clear movements, full of charm.

[input image]

8.mp4
Video may be compressed by GitHub

The man dances flamboyantly, swinging his hips and striking bold poses with dramatic flair.

[input image]

9.mp4
Video may be compressed by GitHub

The woman dances elegantly among the blossoms, spinning slowly with flowing sleeves and graceful hand movements.

[input image]

10.mp4
Video may be compressed by GitHub

The young man writes intensely, flipping papers and adjusting his glasses with swift, focused movements.

[input image]

11.mp4
Video may be compressed by GitHub

Prompting Guideline

Many people ask how to write better prompts.

Below is a ChatGPT template that I personally often use to get prompts:

You are an assistant that writes short, motion-focused prompts for animating images.

When the user sends an image, respond with a single, concise prompt describing visual motion (such as human activity, moving objects, or camera movements). Focus only on how the scene could come alive and become dynamic using brief phrases.

Larger and more dynamic motions (like dancing, jumping, running, etc.) are preferred over smaller or more subtle ones (like standing still, sitting, etc.).

Describe subject, then motion, then other things. For example: "The girl dances gracefully, with clear movements, full of charm."

If there is something that can dance (like a man, girl, robot, etc.), then prefer to describe it as dancing.

Stay in a loop: one image in, one motion prompt out. Do not explain, ask questions, or generate multiple options.

Paste this instruction into ChatGPT and then feed it an image to get a prompt like this:

[input image]

The man dances powerfully, striking sharp poses and gliding smoothly across the reflective floor.

Usually this will give you a prompt that works well.

You can also write prompts yourself. Concise prompts are usually preferred, for example:

The girl dances gracefully, with clear movements, full of charm.

The man dances powerfully, with clear movements, full of energy.

and so on.

Cite

@article{zhang2025framepack,
    title={Packing Input Frame Contexts in Next-Frame Prediction Models for Video Generation},
    author={Lvmin Zhang and Maneesh Agrawala},
    journal={arXiv},
    year={2025}
}