The PyTorch implementation of the AAAI 2024 paper "Full-Body Motion Reconstruction with Sparse Sensing from Graph Perspective".
- Download the AMASS dataset from AMASS.
- Download the body model from http://smpl.is.tue.mpg.de and place it in the `support_data/body_models` directory of this repository.
- Run `prepare_data.py` to prepare the data from the VR device. The data split follows the `data_split` folder.
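To illustrate what "sparse sensing" data preparation produces, here is a minimal sketch (not the repository's actual code): a VR device tracks only the head and two hands, so full-body AMASS sequences are reduced to those sparse observations. The joint indices and array shapes below are placeholders for illustration, not the repo's real mapping.

```python
import numpy as np

# Hypothetical indices for head, left hand, right hand in a 22-joint
# SMPL-style skeleton; the repository's actual mapping may differ.
TRACKED_JOINTS = [15, 20, 21]

def to_sparse_input(full_body_poses: np.ndarray) -> np.ndarray:
    """Select the sparse tracked joints from (frames, joints, dims) poses."""
    return full_body_poses[:, TRACKED_JOINTS, :]

# 100 frames, 22 joints, 6D rotation representation per joint.
seq = np.random.randn(100, 22, 6)
sparse = to_sparse_input(seq)
print(sparse.shape)  # (100, 3, 6)
```

The network then learns to reconstruct the full 22-joint pose from these three tracked signals.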
For training, please run:

```
python train.py
```

For testing, please run:

```
python test.py
```
Click Pretrained Models to download our pretrained model, and put it into `results/Avatar/models/`.
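As a sketch, the expected checkpoint directory can be created as follows; the path comes from the instructions above, while the checkpoint filename is whatever the download provides (shown here as a placeholder):

```shell
# Create the directory where test.py looks for the checkpoint.
mkdir -p results/Avatar/models

# Then move the downloaded checkpoint into it, e.g.:
# mv ~/Downloads/<checkpoint>.pth results/Avatar/models/
ls results/Avatar
```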
If you find our work useful, please cite:

```bibtex
@article{Yao_Wu_Yi_2024,
  title={Full-Body Motion Reconstruction with Sparse Sensing from Graph Perspective},
  volume={38},
  url={https://ojs.aaai.org/index.php/AAAI/article/view/28483},
  DOI={10.1609/aaai.v38i7.28483},
  number={7},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  author={Yao, Feiyu and Wu, Zongkai and Yi, Li},
  year={2024},
  month={Mar.},
  pages={6612--6620}
}
```
This project is released under the MIT license. Our network-training code follows the frameworks of AvatarPoser and SCI-NET.