NeurIPS 2024
This repository is the official implementation of the paper "Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control".
This repository is still under construction; many updates will be applied in the near future.
Zhengfei Kuang*, Shengqu Cai*, Hao He, Yinghao Xu, Hongsheng Li, Leonidas Guibas, Gordon Wetzstein
Research on video generation has recently made tremendous progress, enabling high-quality videos to be generated from text prompts or images. Adding control to the video generation process is an important goal moving forward and recent approaches that condition video generation models on camera trajectories make strides towards it. Yet, it remains challenging to generate a video of the same scene from multiple different camera trajectories. Solutions to this multi-video generation problem could enable large-scale 3D scene generation with editable camera trajectories, among other applications. We introduce collaborative video diffusion (CVD) as an important step towards this vision. The CVD framework includes a novel cross-video synchronization module that promotes consistency between corresponding frames of the same video rendered from different camera poses using an epipolar attention mechanism. Trained on top of a state-of-the-art camera-control module for video generation, CVD generates multiple videos rendered from different camera trajectories with significantly better consistency than baselines, as shown in extensive experiments.
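The epipolar attention mentioned above constrains cross-video attention so that a pixel in one video attends only to pixels near its epipolar line in the other video. The following numpy sketch shows how such a binary mask can be built from a fundamental matrix; it illustrates the general idea only and is not CVD's actual implementation (function name and shapes are ours):

```python
import numpy as np

def epipolar_mask(F, pts_a, pts_b, thresh=2.0):
    """Binary cross-view attention mask from a fundamental matrix.

    F:      (3, 3) fundamental matrix mapping view-A points to
            epipolar lines in view B (l = F @ x_a).
    pts_a:  (N, 2) pixel coordinates in view A.
    pts_b:  (M, 2) pixel coordinates in view B.
    Returns an (N, M) bool mask: True where a view-B pixel lies within
    `thresh` pixels of the epipolar line of the view-A pixel.
    """
    # Lift to homogeneous coordinates.
    xa = np.hstack([pts_a, np.ones((len(pts_a), 1))])  # (N, 3)
    xb = np.hstack([pts_b, np.ones((len(pts_b), 1))])  # (M, 3)
    # Epipolar line in view B for each view-A point: l = F @ x_a.
    lines = xa @ F.T                                   # (N, 3)
    # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2).
    num = np.abs(lines @ xb.T)                         # (N, M)
    den = np.linalg.norm(lines[:, :2], axis=1, keepdims=True)
    return num / np.maximum(den, 1e-8) < thresh
```

In an attention layer, this mask would zero out (or set to -inf before softmax) all key positions off the epipolar line, so corresponding frames only exchange information along geometrically consistent directions.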
Clone the repository (requires git):
git clone https://github.com/CVD
cd CVD
For the environment, run:
conda env create -f environment.yaml
conda activate cameractrl
pip install torch==2.2.0+cu118 torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118
pip install -r requirements.txt
We require AnimateDiff and CameraCtrl to be built:
- Download Stable Diffusion V1.5 (SD1.5) from HuggingFace.
- Download the checkpoints of the AnimateDiff V3 (ADV3) adaptor and motion module from AnimateDiff.
- Run `tools/merge_lora2unet.py` to merge the ADV3 adaptor weights into the SD1.5 UNet and save the result to a new subfolder (e.g., `unet_webvidlora_v3`) under the SD1.5 folder.
- Download the pretrained camera control model from Google Drive.
- Download the LoRA trained on RealEstate10K from Google Drive.
- Download our synchronization module from Google Drive.
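The merge step above folds the LoRA adaptor weights permanently into the base UNet. The repo's `tools/merge_lora2unet.py` walks the full checkpoint; the underlying update for each LoRA-adapted layer is the standard low-rank merge W ← W + α·(up·down). A minimal numpy sketch of that update (function name and shapes are illustrative, not the script's API):

```python
import numpy as np

def merge_lora(weight, lora_down, lora_up, alpha=1.0):
    """Fold a low-rank LoRA update into a base weight matrix.

    weight:    (out, in)   base layer weight W.
    lora_down: (rank, in)  LoRA "A" / down-projection matrix.
    lora_up:   (out, rank) LoRA "B" / up-projection matrix.
    Returns the merged weight W + alpha * (up @ down).
    """
    return weight + alpha * (lora_up @ lora_down)
```

After merging, the LoRA matrices are no longer needed at inference time: the merged UNet behaves identically to base-UNet-plus-LoRA but with no extra per-layer computation.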
Depending on where you store the data and checkpoints, you may need to change a few settings in the configuration YAML files. We marked the important lines you may want to look at in our example configuration files.
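For instance, the path-related entries in a config typically look something like the fragment below. The keys and paths here are purely illustrative placeholders; use the actual keys found in the example configs and point them at wherever you stored the checkpoints:

```yaml
# Illustrative placeholders only -- match the real keys in the example configs.
pretrained_model_path: "./checkpoints/stable-diffusion-v1-5"   # SD1.5 download location
unet_subfolder: "unet_webvidlora_v3"                           # subfolder created by the merge step
motion_module_ckpt: "./checkpoints/animatediff_v3_mm.ckpt"     # ADV3 motion module
```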
We provide two scripts for sampling consistent videos: the simplest case of two-video generation, and the advanced case of multi-video generation and generation along complex camera trajectories.
TBD
TBD
TBD
Please cite our paper:
@inproceedings{kuang2024cvd,
author={Kuang, Zhengfei and Cai, Shengqu and He, Hao and Xu, Yinghao and Li, Hongsheng and Guibas, Leonidas and Wetzstein, Gordon},
title={Collaborative Video Diffusion: Consistent Multi-video Generation with Camera Control},
booktitle={NeurIPS},
year={2024}
}