This repository is the official implementation of the TSTMotion baseline. The complete code is being cleaned up and will be released soon.
├── datasets
│ ├── demo_scene
│ │ ├── ScanNet0604
│ │ │ ├── detection_results.pkl
│ │ │ ├── scene0604_00_vh_clean.ply
│ ├── HumanML3D
│ │ ├── new_joint_vecs
│ │ ├── new_joints
│ ├── prompt
│ ├── smplx
│ │ ├── SMPLX_NEUTRAL.npz
│ │ ├── SMPLX_NEUTRAL.pkl
├── OmniControl
│ ├── glove
│ ├── t2m
│ ├── save
│ │ ├── omnicontrol_ckpt
│ │ │ ├── model_humanml3d.pt
├── scripts
├── utils
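After completing the downloads described below, you can sanity-check that everything landed where the folder structure above expects. A minimal sketch, using only the standard library; the `missing_files` helper and its file list are ours, not part of the repository:

```python
from pathlib import Path

# Key files from the folder structure above (paths relative to the repo root).
REQUIRED = [
    "datasets/demo_scene/ScanNet0604/detection_results.pkl",
    "datasets/demo_scene/ScanNet0604/scene0604_00_vh_clean.ply",
    "datasets/smplx/SMPLX_NEUTRAL.npz",
    "datasets/smplx/SMPLX_NEUTRAL.pkl",
    "OmniControl/save/omnicontrol_ckpt/model_humanml3d.pt",
]

def missing_files(root="."):
    """Return the expected files that are not present under root."""
    root = Path(root)
    return [p for p in REQUIRED if not (root / p).is_file()]
```

Running `missing_files()` from the repo root should return an empty list once setup is complete.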
conda env create -f environment.yml
conda activate tstmotion
python -m spacy download en_core_web_sm
pip install git+https://github.com/openai/CLIP.git
The downloaded files should be placed in the OmniControl folder as shown in the folder structure, including glove, t2m, and smplx.
cd OmniControl
bash prepare/download_smpl_files.sh
bash prepare/download_glove.sh
bash prepare/download_t2m_evaluators.sh
Follow the instructions in the HumanML3D repository, then copy the results as shown in the folder structure.
The downloaded checkpoint should be placed as shown in the folder structure, including model_humanml3d.pt.
cd OmniControl/save
gdown --id 1oTkBtArc3xjqkYD6Id7LksrTOn3e1Zud
unzip omnicontrol_ckpt.zip -d .
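Large Google Drive downloads can occasionally be truncated. As a rough completeness check after unzipping, you can verify that the checkpoint exists and has a plausible size; this helper and its size threshold are our own assumption, not part of the repository:

```python
from pathlib import Path

def checkpoint_ok(path="omnicontrol_ckpt/model_humanml3d.pt", min_bytes=1_000_000):
    """Rough completeness check: the file exists and is at least min_bytes
    (an assumed threshold; a real checkpoint is far larger than 1 MB)."""
    p = Path(path)
    return p.is_file() and p.stat().st_size >= min_bytes
```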
You can download the SMPL-X model files for visualization, including SMPLX_NEUTRAL.npz and SMPLX_NEUTRAL.pkl.
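An .npz file is an ordinary zip archive of .npy arrays, so you can peek at what SMPLX_NEUTRAL.npz contains without installing any SMPL-X tooling. A minimal stdlib-only sketch (the helper name is ours):

```python
import zipfile

def npz_array_names(path):
    """List the array names stored in an .npz file
    (an .npz is simply a zip archive of .npy entries)."""
    with zipfile.ZipFile(path) as zf:
        return [name[:-4] for name in zf.namelist() if name.endswith(".npy")]
```

This is handy for confirming the model file downloaded intact before running visualization.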
You can download the prepared segmentation results (the demo_scene folder) from Google Drive.
Note that the point cloud scene0604_00_vh_clean.ply must be downloaded from ScanNet.
You can change the scene and the motion caption in demo.sh.
Note that before generating motion, you must provide your OpenAI API key.
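A common way to supply the key is through an environment variable. Whether the scripts read `OPENAI_API_KEY` or expect the key hard-coded in demo.sh is not specified here, so treat the following as a sketch under that assumption:

```python
import os

# Assumption: the pipeline reads the key from the OPENAI_API_KEY environment
# variable; if demo.sh expects it elsewhere, adapt accordingly.
os.environ.setdefault("OPENAI_API_KEY", "sk-...your-key-here...")

def get_api_key():
    """Fail early with a clear message if no key is configured."""
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError("Set OPENAI_API_KEY before running motion generation.")
    return key
```

Exporting the variable in your shell (`export OPENAI_API_KEY=...`) before running demo.sh achieves the same thing.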
cd scripts
bash demo.sh
bash visualize.sh
Some code is borrowed from MDM, HUMANISE, and OmniControl.
If you find TSTMotion useful for your work, please cite:
@article{ ,
author = {},
title = {TSTMotion: Training-free Scene-aware Text-to-motion Generation},
journal = {},
year = {2024},
}