This repository contains a simple, unofficial implementation of Animate Anyone. The project is built upon magic-animate and AnimateDiff.
The environment setup is the same as for magic-animate.
We are collecting video data of human motion, as the TikTok dataset is too small to train a sufficiently robust model. If you are interested in collaborating with us, please email guoqin@stu.pku.edu.cn.
- Release Training Code.
- Release Inference Code and unofficial pre-trained weights.
- Data Release (Within Legal Boundaries): We are collecting and refining a dataset for further training and improvement, and intend to release it publicly within legal constraints.
```shell
# Stage 1
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/train_stage_1.yaml

# Stage 2
torchrun --nnodes=1 --nproc_per_node=1 train.py --config configs/training/train_stage_2.yaml
```
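To scale beyond a single GPU, increase `--nproc_per_node`; `torchrun` then spawns one worker process per GPU and communicates the layout through standard environment variables (`RANK`, `LOCAL_RANK`, `WORLD_SIZE`). A minimal sketch of reading that contract inside a training script (the helper name is illustrative, not part of this repo's `train.py`):

```python
import os

def read_torchrun_env():
    """Read the per-worker env vars that torchrun exports for each
    process it spawns (torch.distributed's standard contract)."""
    return {
        "rank": int(os.environ.get("RANK", 0)),            # global worker index
        "local_rank": int(os.environ.get("LOCAL_RANK", 0)),  # GPU index on this node
        "world_size": int(os.environ.get("WORLD_SIZE", 1)),  # total worker count
    }
```

In a real script, `local_rank` would typically be passed to `torch.cuda.set_device()` before calling `torch.distributed.init_process_group()`.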
Special thanks to the original authors of the Animate Anyone project and to the contributors of the magic-animate and AnimateDiff repositories for the open research and foundational work that inspired this unofficial implementation.