
Harmonious Group Choreography with Trajectory-Controllable Diffusion

Code for AAAI 2025 (oral) paper "Harmonious Group Choreography with Trajectory-Controllable Diffusion"

✨ If you find this project useful, a ⭐️ would make our day! ✨

Environment Setup

To set up the environment, follow these steps:

# Create a new conda environment
conda create -n tcdiff python=3.9
conda activate tcdiff

# Install PyTorch with CUDA support
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia

# Configure and install PyTorch3D
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch3d/
conda install -c fvcore -c iopath -c conda-forge fvcore iopath 
conda install pytorch3d

# Install remaining requirements
pip install -r requirements.txt
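
Optionally, a quick sanity check can confirm that PyTorch sees the GPU and that PyTorch3D imports cleanly. This is a minimal sketch, not part of the repo:

# verify_env.py -- hypothetical helper, not included in this repository
import torch
import pytorch3d

print("torch:", torch.__version__)                    # expect 2.0.0
print("CUDA available:", torch.cuda.is_available())   # should be True on a GPU machine
print("pytorch3d:", pytorch3d.__version__)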

Data Preprocessing

  1. Please download AIOZ-GDance from here and place it under ./data/AIOZ_Dataset.

  2. Run the Preprocessing Script:

cd data/
python create_dataset.py
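
Before running the script, you may want to confirm the raw data is where create_dataset.py expects it. The check below is a minimal sketch; the exact file layout inside ./data/AIOZ_Dataset depends on the AIOZ-GDance release:

# check_dataset.py -- hypothetical helper, not part of the repo
from pathlib import Path

root = Path("./data/AIOZ_Dataset")   # path expected by step 1 above
if not root.is_dir():
    raise SystemExit(f"Dataset not found at {root.resolve()} -- see step 1 above")
files = [p for p in root.rglob("*") if p.is_file()]
print(f"Found {len(files)} files under {root}")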

Training

Train the model using the following commands:

  CUDA_VISIBLE_DEVICES=1,2,3 accelerate launch train.py
  • During training, the code produces a minimalist stick-figure visualization to help you gauge model performance, as shown below.
  cd TrajDecoder
  python train_traj.py --device cuda:0
  • Similarly, when training the trajectory model with the Dance-Beat Navigator (DBN) module, a trajectory video is generated, as shown below.
  • Note: All videos play muted by default; click the volume icon to unmute. If the videos fail to play due to network issues, please refer to the files in the /assets folder: vis1.mp4, vis2.mp4, traj1.mp4, traj2.mp4.
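
The stick-figure previews are produced by the training code itself; the sketch below only illustrates the general idea of drawing one frame of per-dancer joint positions with matplotlib. The array shapes and bone connections here are hypothetical, not the repo's actual data format:

# stick_figure_preview.py -- illustrative only, not the repo's visualization code
import numpy as np
import matplotlib.pyplot as plt

# hypothetical frame: 3 dancers x 24 SMPL joints x 3 coordinates
joints = np.random.randn(3, 24, 3) * 0.3
# a few hypothetical bone connections (a real SMPL kinematic tree has 23 bones)
edges = [(0, 1), (0, 2), (0, 3), (1, 4), (2, 5), (3, 6)]

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
for dancer in joints:
    for a, b in edges:
        # draw one bone as a line segment between its two joints
        ax.plot(*zip(dancer[a], dancer[b]), color="steelblue")
    ax.scatter(dancer[:, 0], dancer[:, 1], dancer[:, 2], s=8, color="black")
ax.set_title("stick-figure preview (illustrative)")
plt.savefig("stick_figure_frame.png")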

Generating Results

Our codebase provides two evaluation modes for generating results with the trained model:

  • Validation Without Trajectory Model
python train.py --mode "val_without_TrajModel" 

When the trajectory model is not yet fully trained, this mode uses trajectories directly from the dataset for testing purposes. Both training and test splits are included to assess overall performance without introducing errors from an incomplete trajectory model.

  • Testing with Generated Trajectories
python train.py --mode "test" 

This mode evaluates the full pipeline, conditioning generation on trajectories produced by the trained trajectory model (the Dance-Beat Navigator) rather than on ground-truth trajectories from the dataset.

These options allow flexible benchmarking at different stages of development, covering both partial and full-pipeline evaluation.
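
Conceptually, the --mode flag controls where the dancers' root trajectories come from. The sketch below mirrors that branching with placeholder functions; none of these helpers exist in the repo's train.py:

# evaluation mode selection -- illustrative sketch, not the actual train.py
import argparse
import numpy as np

def load_gt_trajectories():
    """Ground-truth root trajectories from the dataset (placeholder)."""
    return np.zeros((3, 150, 3))          # 3 dancers, 150 frames, xyz

def generate_trajectories_with_dbn():
    """Trajectories from the trained Dance-Beat Navigator (placeholder; the real module is conditioned on music)."""
    return np.zeros((3, 150, 3))

def run_diffusion_sampling(trajectories):
    print("sampling motion conditioned on trajectories of shape", trajectories.shape)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--mode", choices=["val_without_TrajModel", "test"], required=True)
    mode = parser.parse_args().mode
    if mode == "val_without_TrajModel":
        # judge motion quality independently of the trajectory model
        trajs = load_gt_trajectories()
    else:  # "test": full pipeline with generated trajectories
        trajs = generate_trajectories_with_dbn()
    run_diffusion_sampling(trajs)

if __name__ == "__main__":
    main()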

Visualization in Blender

We developed automated scripts to transform the generated SMPL motion data into beautiful 3D animations rendered in Blender, replicating the high-quality visuals featured on our project page. The entire pipeline, from data preparation to the final Blender renders, is fully scripted for ease of use and reproducibility. For detailed steps, please refer to the Blender_Visulization/ Rendering Pipeline documentation. ✨ Your star is the greatest encouragement for our work. ✨
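
For the actual rendering steps, follow the Blender_Visulization/ documentation. The snippet below is only a minimal sketch of the underlying idea, keyframing per-dancer root positions through Blender's Python API; the motion_roots.npy file and its layout are assumptions, not outputs of our scripts:

# run with Blender's bundled Python interpreter -- illustrative only
import bpy
import numpy as np

# hypothetical export: (num_dancers, num_frames, 3) root translations from the generated SMPL motion
roots = np.load("motion_roots.npy")

for d in range(roots.shape[0]):
    # one placeholder object per dancer; the real pipeline imports full SMPL meshes
    bpy.ops.mesh.primitive_uv_sphere_add(radius=0.1)
    obj = bpy.context.active_object
    obj.name = f"dancer_{d}"
    for f in range(roots.shape[1]):
        obj.location = roots[d, f].tolist()
        obj.keyframe_insert(data_path="location", frame=f)

bpy.context.scene.frame_end = roots.shape[1]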

Acknowledgment

The concept of TCDiff is inspired by the solo-dancer generation model EDGE. We sincerely appreciate the EDGE team's contributions to open-source research and development.

Contributing

Heartfelt thanks to my amazing collaborator for making this project shine! 💡💪🌟

Wanlu Zhu 🐸💧🐈‍⬛

Citation

We also present TCDiff++, an end-to-end extension of TCDiff with improved long-term generation, simplified training, and faster convergence. Stay tuned!

@article{dai2025tcdiff++,
  title={TCDiff++: An End-to-end Trajectory-Controllable Diffusion Model for Harmonious Music-Driven Group Choreography},
  author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Li, Xiu and Zhang, Zhenyu and Li, Jun and Yang, Jian},
  journal={arXiv preprint arXiv:2506.18671},
  year={2025}
}
@inproceedings{dai2025harmonious,
  title={Harmonious Music-driven Group Choreography with Trajectory-Controllable Diffusion},
  author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Ren, Zeping and Zhou, Xiangzheng and Ying, Jixuan and Li, Jun and Yang, Jian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={3},
  pages={2645--2653},
  year={2025}
}
