Project Page | Paper | Latest arXiv | Supplementary
# UV Volumes for Real-time Rendering of Editable Free-view Human Performance
Yue Chen*, Xuan Wang*, Xingyu Chen, Qi Zhang, Xiaoyu Li, Yu Guo†, Jue Wang, Fei Wang
(* equal contribution, † corresponding author)
CVPR 2023
This repository is the official PyTorch implementation of UV-Volumes.
## Installation

Please see INSTALL.md for manual installation.
## Run on ZJU-MoCap

Please see INSTALL.md to download the dataset.
Take the training on sequence 313 as an example:

```shell
python3 train_net.py --cfg_file configs/zju_mocap_exp/313.yaml exp_name zju313 resume False output_depth True
```
You can monitor the training process with TensorBoard:

```shell
tensorboard --logdir data/record/UVvolume_ZJU
```
Take the evaluation on sequence 313 as an example:

```shell
python3 run.py --type evaluate --cfg_file configs/zju_mocap_exp/313.yaml exp_name zju313 use_lpips True test.frame_sampler_interval 1 use_nb_mask_at_box True save_img True T_threshold 0.75
```
## Run on CMU Panoptic

Please see INSTALL.md to download and process the dataset.
Take the training on 171204_pose4_sample6 as an example:

```shell
python3 train_net.py --cfg_file configs/cmu_exp/p4s6.yaml exp_name p4s6 resume False output_depth True
```
You can monitor the training process with TensorBoard:

```shell
tensorboard --logdir data/record/UVvolume_CMU
```
Take the evaluation on 171204_pose4_sample6 as an example:

```shell
python3 run.py --type evaluate --cfg_file configs/cmu_exp/p4s6.yaml exp_name p4s6 use_lpips True test.frame_sampler_interval 1 use_nb_mask_at_box True save_img True
```
## Citation

If you find this code useful for your research, please use the following BibTeX entry.

```
@inproceedings{chen2023uv,
  title={UV Volumes for real-time rendering of editable free-view human performance},
  author={Chen, Yue and Wang, Xuan and Chen, Xingyu and Zhang, Qi and Li, Xiaoyu and Guo, Yu and Wang, Jue and Wang, Fei},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={16621--16631},
  year={2023}
}
```
## Acknowledgements

Our code is based on the awesome PyTorch implementation of NeuralBody. We appreciate all the contributors.