This is the official PyTorch implementation of the 2025 paper "From Rigging to Waving: 3D-Guided Diffusion for Natural Animation of Hand-Drawn Characters".
To download the UniAnimate models, please follow the download commands provided in the UniAnimate repository. After that, you can download our domain-adapted model from Baidu (pwd: r5do).
Once downloaded, move the checkpoints into the ./checkpoints/ directory so that the model weights are organized as follows (a quick sanity-check sketch follows the listing):
|---- open_clip_pytorch_model.bin
|---- unianimate_16f_32f_non_ema_223000.pth
|---- v2-1_512-ema-pruned.ckpt
└---- rigging2waving_non_ema_00040000.pth
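The small sketch below is an illustrative way to verify that all four checkpoints are in place before running inference or training; the helper name and script are not part of the released code, only the file names above are.

```python
from pathlib import Path

# Checkpoints expected under ./checkpoints/ (names taken from the listing above).
REQUIRED_CHECKPOINTS = [
    "open_clip_pytorch_model.bin",
    "unianimate_16f_32f_non_ema_223000.pth",
    "v2-1_512-ema-pruned.ckpt",
    "rigging2waving_non_ema_00040000.pth",
]

def check_checkpoints(root: str = "./checkpoints") -> None:
    """Raise if any expected checkpoint file is missing."""
    missing = [name for name in REQUIRED_CHECKPOINTS if not (Path(root) / name).is_file()]
    if missing:
        raise FileNotFoundError(f"Missing checkpoints in {root}: {missing}")
    print("All checkpoints found.")

if __name__ == "__main__":
    check_checkpoints()
```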
To generate video clips (32 frames), execute the following command:
python inference.py --cfg configs/infer.yaml
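If you prefer to launch inference from Python, e.g. to sweep over several config files, a thin wrapper such as the one below simply shells out to the command above; it is illustrative and not part of the repository.

```python
import subprocess

def run_inference(cfg: str = "configs/infer.yaml") -> None:
    """Run the 32-frame inference command for a given config file."""
    subprocess.run(["python", "inference.py", "--cfg", cfg], check=True)

if __name__ == "__main__":
    run_inference()
```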
All training data can be downloaded from Baidu (pwd: r5do). After downloading, extract the files and place them in the data folder so they are organized as follows (a loading sketch follows the listing):
└---- rigging2waving_dataset_train
      |---- 0a4ff03c912a4e5487e74e05423f3c6d/   # A hand-drawn character
      |     |---- blender_render/               # Animation sequence
      |     └---- char/                         # Reference
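As a rough sketch of how this layout can be consumed, the snippet below iterates over the character folders and pairs each reference directory with its rendered animation frames; the .png frame extension is an assumption and should be adjusted to the actual export format.

```python
from pathlib import Path

def list_characters(root: str = "data/rigging2waving_dataset_train"):
    """Yield (character_id, reference_dir, render_frames) for each hand-drawn character."""
    for char_dir in sorted(Path(root).iterdir()):
        if not char_dir.is_dir():
            continue
        reference_dir = char_dir / "char"          # reference drawing(s)
        render_dir = char_dir / "blender_render"   # rendered animation sequence
        frames = sorted(render_dir.glob("*.png"))  # assumed frame format
        yield char_dir.name, reference_dir, frames

if __name__ == "__main__":
    for char_id, reference_dir, frames in list_characters():
        print(char_id, reference_dir, f"{len(frames)} frames")
```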
To train the domain-adapted model for hand-drawn characters, use the following command:
python train.py --cfg configs/train.yaml

TODO:
- Add long video generation.