Code for AAAI 2025 (oral) paper "Harmonious Group Choreography with Trajectory-Controllable Diffusion"
✨ If you find this project useful, a ⭐️ would make our day! ✨
To set up the environment, follow these steps:
# Create a new conda environment
conda create -n tcdiff python=3.9
conda activate tcdiff
# Install PyTorch with CUDA support
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.7 -c pytorch -c nvidia
# Configure and install PyTorch3D
conda config --add channels https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch3d/
conda install -c fvcore -c iopath -c conda-forge fvcore iopath
conda install pytorch3d
# Install remaining requirements
pip install -r requirements.txt
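After installation, a quick sanity check can confirm that PyTorch sees the GPU and that PyTorch3D imports correctly. The snippet below is a minimal sketch of our own and is not shipped with the repository:

# quick environment check (illustrative, not part of the repository)
import torch
import pytorch3d  # raises ImportError if the PyTorch3D install failed

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("pytorch3d:", pytorch3d.__version__)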
Please download the AIOZ-GDance dataset from here and place it in the ./data/AIOZ_Dataset path.
Run the Preprocessing Script:
cd data/
python create_dataset.py

Train the model using the following command:
CUDA_VISIBLE_DEVICES=1,2,3 accelerate launch train.py

During training, the code produces a minimalistic stick-figure representation to help visualize model performance, as shown below.
(Training visualization videos: see vis1.mp4 and vis2.mp4 in the /assets folder.)
cd TrajDecoder
python train_traj.py --device cuda:0

Similarly, when training trajectories with the Dance-Beat Navigator (DBN) module, a trajectory video is generated, as shown below.
(Trajectory visualization videos: see traj1.mp4 and traj2.mp4 in the /assets folder.)
- Note: All videos play muted by default; click the volume icon to unmute. If the videos fail to play due to network issues, please refer to the files in the /assets folder: vis1.mp4, vis2.mp4, traj1.mp4, traj2.mp4.
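Beyond the built-in video export, the generated trajectories can also be inspected offline. The snippet below is a minimal sketch of our own, assuming the root trajectories are saved as an array of shape (num_dancers, num_frames, 2); the file name traj_sample.npy is a placeholder:

import numpy as np
import matplotlib.pyplot as plt

# Hypothetical file; assumed shape (num_dancers, num_frames, 2)
traj = np.load("traj_sample.npy")

for i, dancer_traj in enumerate(traj):
    # Plot each dancer's root trajectory on the ground plane
    plt.plot(dancer_traj[:, 0], dancer_traj[:, 1], label=f"dancer {i}")

plt.axis("equal")
plt.legend()
plt.title("DBN root trajectories (top-down view)")
plt.savefig("traj_preview.png")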
Our codebase provides two evaluation modes for generating results with the trained model:
- Validation Without Trajectory Model
python train.py --mode "val_without_TrajModel"

When the trajectory model is not yet fully trained, this mode uses trajectories taken directly from the dataset for testing purposes. Both training and test splits are included, so overall performance can be assessed without introducing errors from an incomplete trajectory model.
- Testing with Generated Trajectories
python train.py --mode "test"

This mode evaluates the full pipeline with trajectories produced by the trained trajectory model. Together, these options allow flexible benchmarking at different stages of development, covering both partial and full-pipeline evaluations; a simplified sketch of the mode switch is shown below.
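The sketch below is a hypothetical illustration of how such a mode flag is typically dispatched with argparse; it is not the repository's actual train.py, and the printed messages stand in for the real training and evaluation routines:

import argparse

def main():
    parser = argparse.ArgumentParser()
    # "train" is assumed to be the default mode; the other names mirror the commands above
    parser.add_argument("--mode", default="train",
                        choices=["train", "val_without_TrajModel", "test"])
    args = parser.parse_args()

    if args.mode == "val_without_TrajModel":
        # Evaluate using ground-truth trajectories taken directly from the dataset
        print("validation with dataset trajectories")
    elif args.mode == "test":
        # Evaluate the full pipeline with trajectories produced by the trained DBN
        print("testing with generated trajectories")
    else:
        print("standard training")

if __name__ == "__main__":
    main()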
We developed automated scripts to transform the generated SMPL motion data into 3D animations rendered in Blender, replicating the high-quality visuals featured on our project page. The entire rendering pipeline, from data preparation to Blender rendering, is fully scripted for ease of use and reproducibility. For detailed steps, please refer to the rendering-pipeline documentation in Blender_Visulization/. ✨ Your star is the greatest encouragement for our work. ✨
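For reference, converting SMPL parameters into per-frame meshes that Blender can import typically looks like the sketch below. It is a minimal example of our own, assuming the smplx and trimesh packages; the motion file name, array keys, and model path are placeholders, so please adapt them to the actual scripts in Blender_Visulization/:

import numpy as np
import torch
import smplx
import trimesh

# Placeholder inputs: per-frame SMPL pose (axis-angle) and root translation
motion = np.load("generated_motion.npz")                            # hypothetical file
body_pose = torch.from_numpy(motion["body_pose"]).float()           # (T, 69)
global_orient = torch.from_numpy(motion["global_orient"]).float()   # (T, 3)
transl = torch.from_numpy(motion["transl"]).float()                 # (T, 3)

# Neutral SMPL body model; the SMPL model files must be downloaded separately
model = smplx.create("smpl_models", model_type="smpl", gender="neutral", batch_size=1)

for t in range(body_pose.shape[0]):
    out = model(body_pose=body_pose[t:t + 1],
                global_orient=global_orient[t:t + 1],
                transl=transl[t:t + 1])
    verts = out.vertices.detach().cpu().numpy()[0]
    # Export one OBJ per frame; Blender can import these as a mesh sequence
    trimesh.Trimesh(verts, model.faces, process=False).export(f"frame_{t:04d}.obj")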

The concept of TCDiff is inspired by the solo-dancer generation model EDGE. We sincerely appreciate the EDGE team's contributions to open-source research and development.
Heartfelt thanks to my amazing collaborator, Wanlu Zhu, for making this project shine!
We present TCDiff++, an end-to-end extension of TCDiff with improved long-term generation performance and simpler, faster-converging training. Stay tuned!
@article{dai2025tcdiff++,
  title={TCDiff++: An End-to-end Trajectory-Controllable Diffusion Model for Harmonious Music-Driven Group Choreography},
  author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Li, Xiu and Zhang, Zhenyu and Li, Jun and Yang, Jian},
  journal={arXiv preprint arXiv:2506.18671},
  year={2025}
}

@inproceedings{dai2025harmonious,
  title={Harmonious Music-driven Group Choreography with Trajectory-Controllable Diffusion},
  author={Dai, Yuqin and Zhu, Wanlu and Li, Ronghui and Ren, Zeping and Zhou, Xiangzheng and Ying, Jixuan and Li, Jun and Yang, Jian},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={39},
  number={3},
  pages={2645--2653},
  year={2025}
}




