"A astronaut, brown background" | "A Hulk, on the sea" |
"A man in the park, Van Gogh style" | "The Stormtroopers, on the beach" |
TL;DR: We tune a 2D Stable Diffusion model to generate character videos from pose sequences and text descriptions.
CLICK for full abstract
Generating text-editable and pose-controllable character videos is in high demand for creating various digital humans. Nevertheless, this task has been restricted by the absence of a comprehensive dataset with paired video-pose captions and of generative prior models for videos. In this work, we design a novel two-stage training scheme that leverages easily obtained datasets (i.e., image-pose pairs and pose-free videos) and a pre-trained text-to-image (T2I) model to obtain pose-controllable character videos. Specifically, in the first stage, only keypoint-image pairs are used for controllable text-to-image generation: we learn a zero-initialized convolutional encoder to encode the pose information. In the second stage, we fine-tune the motion of the above network on a pose-free video dataset by adding learnable temporal self-attention and reformed cross-frame self-attention blocks. Powered by our new designs, our method successfully generates continuously pose-controllable character videos while preserving the editing and concept-composition ability of the pre-trained T2I model. The code and models will be made publicly available.
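For readers curious about the two designs named in the abstract, below is a minimal PyTorch sketch of (1) a zero-initialized convolutional pose encoder for the first stage and (2) a cross-frame self-attention block for the second stage. This is an illustrative assumption, not the repository's actual code; all class names, channel sizes, and tensor shapes are hypothetical.

```python
import torch
import torch.nn as nn


def zero_module(module: nn.Module) -> nn.Module:
    """Zero out a module's parameters so it contributes nothing at init."""
    for p in module.parameters():
        nn.init.zeros_(p)
    return module


class PoseEncoder(nn.Module):
    """Stage 1 (sketch): encode a keypoint map into a residual added to the
    UNet's latent input. The zero-initialized output conv leaves the
    pre-trained T2I model unchanged at the start of training."""

    def __init__(self, pose_channels: int = 3, latent_channels: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(pose_channels, 16, 3, padding=1), nn.SiLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.SiLU(),
        )
        self.out = zero_module(nn.Conv2d(128, latent_channels, 3, padding=1))

    def forward(self, pose_map: torch.Tensor) -> torch.Tensor:
        # (B, 3, 512, 512) skeleton image -> (B, 4, 64, 64) latent residual
        return self.out(self.body(pose_map))


class CrossFrameAttention(nn.Module):
    """Stage 2 (sketch): each frame's tokens attend to the first and the
    previous frame, encouraging temporal consistency across frames."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim)
        b, f, n, d = x.shape
        first = x[:, :1].expand(-1, f, -1, -1)           # frame 0, repeated
        prev = torch.cat([x[:, :1], x[:, :-1]], dim=1)   # previous frame
        kv = torch.cat([first, prev], dim=2)             # (b, f, 2n, d)
        q = x.reshape(b * f, n, d)
        kv = kv.reshape(b * f, 2 * n, d)
        out, _ = self.attn(q, kv, kv)
        return out.reshape(b, f, n, d)
```

In this hypothetical setup, the pose encoder would be trained on keypoint-image pairs with the T2I backbone frozen, and the attention blocks fine-tuned on pose-free videos; please refer to the paper for the exact architecture.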
- 2023.04.03 Released the paper and project page!
- Release the code, configs, and checkpoints for the teaser
- Memory and runtime profiling
- Hands-on guidance for hyperparameter tuning
- Colab
- Release configs for other results and the in-the-wild dataset
- Hugging Face demo: in progress
- Release more applications
We show results for various pose sequences and text prompts.
Note that the MP4 and GIF files on this GitHub page are compressed. Please check our Project Page for the original MP4 video results.
"A Robot, in Sahara desert" | "A Iron man, on the beach" | "A panda, son the sea" |
"A man in the park, Van Gogh style" | "The fireman in the beach" | "Batman, brown background" |
"A Hulk, on the sea" | "A superman, in the forest" | "A Iron man, in the snow" |
"A man in the forest, Minecraft." | "A man in the sea, at sunset" | "James Bond, grey simple background" |
"A Panda on the sea." | "A Stormtrooper on the sea" | "A astronaut on the moon" |
"A astronaut on the moon." | "A Robot in Antarctica." | "A Iron man on the beach." |
"The Obama in the desert" | "Astronaut on the beach." | "Iron man on the snow" |
"A Stormtrooper on the sea" | "A Iron man on the beach." | "A astronaut on the moon." |
"Astronaut on the beach" | "Superman on the forest" | "Iron man on the beach" |
"Astronaut on the beach" | "Robot in Antarctica" | "The Stormtroopers, on the beach" |
@misc{ma2023follow,
title={Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos},
author={Yue Ma and Yingqing He and Xiaodong Cun and Xintao Wang and Ying Shan and Xiu Li and Qifeng Chen},
year={2023},
eprint={2304.01186},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
This repository borrows heavily from Tune-A-Video, FateZero, and prompt-to-prompt. Thanks to the authors for sharing their code and models.
This is the codebase for our research work. We are still working hard to update this repo, and more details will be added in the coming days. If you have any questions or ideas to discuss, feel free to contact Yue Ma, Yingqing He, or Xiaodong Cun.