GenCompositor

This repository contains the implementation of the paper "GenCompositor: Generative Video Compositing with Diffusion Transformer"

Keywords: Diffusion Models, Video Editing, Video Compositing

Shuzhou Yang1, Xiaoyu Li2‡, Xiaodong Cun3, Guangzhi Wang2, Lingen Li4, Ying Shan2, Jian Zhang1✉
1SECE, Peking University  2ARC Lab, Tencent PCG  3GVC Lab, Great Bay University  4The Chinese University of Hong Kong
‡Project Lead  ✉Corresponding Author

   


🔥 Update Log

  • [2025/9/2] 📢 📢 GenCompositor is released, the first generative video compositing framework with ERoPE for layout-unaligned video control.

📌 TODO

  • Release inference code
  • Release gradio demo
  • Release GenCompositor checkpoints (based on CogVideoX-5B)
  • Release training code
  • Dataset curation code

🛠️ Method Overview

Video compositing combines live-action footage to create video productions, serving as a crucial technique in video creation and film production. Traditional pipelines require intensive labor and expert collaboration, resulting in lengthy production cycles and high manpower costs. To address this issue, we automate the process with generative models, a task we call generative video compositing. This new task strives to adaptively inject the identity and motion information of a foreground video into a target video in an interactive manner, allowing users to customize the size, motion trajectory, and other attributes of the dynamic elements added to the final video.

Specifically, we design a novel Diffusion Transformer (DiT) pipeline based on its intrinsic properties. To maintain consistency of the target video before and after editing, we revise a light-weight DiT-based background preservation branch with masked token injection. To inherit dynamic elements from other sources, we propose a DiT fusion block using full self-attention, along with a simple yet effective foreground augmentation for training. Moreover, to fuse background and foreground videos with different layouts under user control, we develop a novel position embedding, named Extended Rotary Position Embedding (ERoPE). Finally, we curate a dataset of 61K video sets for this new task, called VideoComp, which contains complete dynamic elements and high-quality target videos. Experiments demonstrate that our method effectively realizes generative video compositing, outperforming existing possible solutions in fidelity and consistency.
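
Below is a minimal Python sketch of the extended-position idea behind ERoPE. It only illustrates our reading of the description above, namely that foreground tokens receive rotary position indices shifted beyond the background layout instead of reusing the background's indices; the token counts and the rope_rotate helper are hypothetical, and the exact formulation is defined in the paper:

import torch

def rope_rotate(x, positions, base=10000.0):
    # Apply 1D rotary position embedding to x of shape (seq, dim).
    seq, dim = x.shape
    half = dim // 2
    freqs = 1.0 / (base ** (torch.arange(half, dtype=torch.float32) / half))
    angles = positions[:, None].float() * freqs[None, :]            # (seq, half)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

bg_tokens = torch.randn(1350, 64)   # background video tokens (hypothetical count/dim)
fg_tokens = torch.randn(1350, 64)   # foreground video tokens with a different layout

# Vanilla RoPE would assign both streams the same indices (requiring aligned layouts);
# extending the foreground indices past the background ones lets the two streams be
# concatenated and processed jointly by full self-attention without position collisions.
bg_pos = torch.arange(bg_tokens.shape[0])
fg_pos = torch.arange(fg_tokens.shape[0]) + bg_tokens.shape[0]      # extended positions

fused = torch.cat([rope_rotate(bg_tokens, bg_pos),
                   rope_rotate(fg_tokens, fg_pos)], dim=0)          # input to the DiT fusion blocks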

🚀 Getting Started

Environment Requirement 🌍
  • CUDA 12.4
  • PyTorch 2.5.0
  • Python 3.10.16
  • VRAM >= 40GB

Clone the repo:

git clone https://github.com/TencentARC/GenCompositor.git

We recommend first using conda to create a virtual environment and then installing the required libraries. For example:

conda create -n gencompositor python=3.10 -y
conda activate gencompositor
pip install -r requirements.txt

Then, you can install the diffusers library (the modified version included in this repo) with:

cd ./diffusers
pip install -e .

After that, you can install the required ffmpeg through:

conda install -c conda-forge ffmpeg -y

Checkpoints Download ⬇️

Checkpoints of GenCompositor can be downloaded from here. The ckpts folder contains:

  • GenCompositor pretrained checkpoints for CogVideoX-5b-I2V
  • the pretrained CogVideoX-5b-I2V checkpoint from HuggingFace.

You can download the checkpoints and place them in the ckpts folder by:

git lfs install
git clone https://huggingface.co/TencentARC/GenCompositor
mv GenCompositor ckpts

You also need to download the base model CogVideoX-5b-I2V by:

git lfs install
cd ckpts
git clone https://huggingface.co/THUDM/CogVideoX-5b-I2V
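
If you prefer the Python API to git lfs, the same two repositories can also be fetched with huggingface_hub (an equivalent sketch; the target folders match the layout below):

from huggingface_hub import snapshot_download

# The GenCompositor checkpoints become the ckpts folder itself.
snapshot_download(repo_id="TencentARC/GenCompositor", local_dir="ckpts")
# The base model goes into a subfolder of ckpts.
snapshot_download(repo_id="THUDM/CogVideoX-5b-I2V", local_dir="ckpts/CogVideoX-5b-I2V")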

Finally, you need to download the SAM 2 checkpoint for foreground segmentation:

cd ckpts
wget -c https://dl.fbaipublicfiles.com/segment_anything_2/092824/sam2.1_hiera_large.pt

The checkpoint folder structure should look like:

|-- ckpts
    |-- branch
        |-- config.json
        |-- diffusion_pytorch_model.safetensors
    |-- CogVideoX-5b-I2V
        |-- scheduler
        |-- transformer
        |-- vae
        |-- ...
    |-- model
        |-- pytorch_model
        |-- transformer
        |-- latest
        |-- random_states_0.pkl
        |-- random_states_1.pkl
        |-- random_states_2.pkl
        |-- random_states_3.pkl
        |-- random_states_4.pkl
        |-- random_states_5.pkl
        |-- random_states_6.pkl
        |-- random_states_7.pkl
        |-- scheduler.bin
        |-- zero_to_fp32.py
    |-- sam2.1_hiera_large.pt
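
Before running inference, you can optionally sanity-check that the key files from the layout above are in place with a small Python helper (not part of the repo):

from pathlib import Path

# Key paths from the checkpoint layout above.
expected = [
    "ckpts/branch/config.json",
    "ckpts/branch/diffusion_pytorch_model.safetensors",
    "ckpts/CogVideoX-5b-I2V/transformer",
    "ckpts/CogVideoX-5b-I2V/vae",
    "ckpts/model/transformer",
    "ckpts/sam2.1_hiera_large.pt",
]
missing = [p for p in expected if not Path(p).exists()]
print("All checkpoints found." if not missing else f"Missing: {missing}")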

🏃🏼 Running Scripts

Gradio Demo 🤗

You can easily use our model through the Gradio demo:

cd app
CUDA_VISIBLE_DEVICES=0 python app.py

Then open your browser at http://localhost:7860.
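
For reference, the demo is essentially a Gradio front end over the inference pipeline. A minimal sketch of such a wrapper is shown below; run_compositing is a hypothetical stand-in for the model call, and app/app.py is the actual implementation:

import gradio as gr

def run_compositing(fg_video, bg_video, mask_video):
    # Hypothetical stand-in: load GenCompositor, run inference, return the output video path.
    return bg_video  # placeholder

demo = gr.Interface(
    fn=run_compositing,
    inputs=[gr.Video(label="Foreground element"),
            gr.Video(label="Background video"),
            gr.Video(label="Trajectory mask")],
    outputs=gr.Video(label="Composited result"),
)
demo.launch(server_name="0.0.0.0", server_port=7860)  # reachable at http://localhost:7860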

Inference 📜

Alternatively, you can run inference with the scripts:

cd infer

# Adjust the resolution and frame number of both background and foreground videos to the default values, and specify the foreground element to be injected.
bash preprocess_bg_fg_videos.sh

# Drag trajectory (we recommend performing this interactive step on your local machine❗️)
## (1) In the pop-up window, hold down the left mouse button and drag to specify the movement trajectory. (2) Press the "ESC" key to exit; the trajectory file is saved automatically.
python usr.py --video_path "../assets/bg/source/sora15.mp4" --file_path "../assets/usr_mask/sora15_traj.txt"
## Generate the mask video (sora15_traj.mp4) from the specified trajectory txt file (sora15_traj.txt).
python get_movemask.py --rescale 0.4 --fg_video_path "../assets/fg/element/fg_butterfly.mp4" --video_path "../assets/bg/source/sora15.mp4" --file_path "../assets/usr_mask/sora15_traj.txt" --usr_mask_path "../assets/usr_mask/sora15_traj.mp4"

# Video compositing (GenCompositor starts working❗️)
python testinput.py --fg_video_path "../assets/fg/element/fg_butterfly.mp4" --video_path "../assets/bg/source/sora15.mp4" --mask_path "../assets/usr_mask/sora15_traj.mp4" --output_path "./output.mp4" --num_inference_steps 20
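
To make the trajectory step more concrete, the following rough sketch shows how a trajectory file could be turned into a binary mask video with OpenCV. The one-point-per-frame "x y" format, fixed radius, and output file name are assumptions for illustration only; get_movemask.py (with its --rescale option) is the authoritative implementation:

import cv2
import numpy as np

def trajectory_to_mask(traj_path, video_path, out_path, radius=40):
    # Read one "x y" point per frame from the trajectory file (assumed format).
    points = [tuple(int(float(v)) for v in line.split()) for line in open(traj_path)]
    cap = cv2.VideoCapture(video_path)
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))
    for x, y in points:
        mask = np.zeros((h, w, 3), dtype=np.uint8)
        cv2.circle(mask, (x, y), radius, (255, 255, 255), thickness=-1)  # white disk marks the insertion region
        writer.write(mask)
    writer.release()

trajectory_to_mask("../assets/usr_mask/sora15_traj.txt",
                   "../assets/bg/source/sora15.mp4",
                   "example_mask.mp4")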

🤝🏼 Cite Us

@article{GenCompositor,
  title={GenCompositor: Generative Video Compositing with Diffusion Transformer},
  author={Shuzhou Yang and Xiaoyu Li and Xiaodong Cun and Guangzhi Wang and Lingen Li and Ying Shan and Jian Zhang},
  journal={arXiv preprint arXiv:2509.02460},
  year={2025}
}

💖 Acknowledgement

Our code is modified from diffusers and CogVideoX. Thanks to all the contributors!

📄 License

We are pleased to support the open source community. For complete license details, see LICENSE and NOTICE.

📬 Contact

For any questions, feel free to reach out via email.

🌟 Star History

[Star History Chart]
