Paper has been submitted to arXiv: https://arxiv.org/abs/2602.13760
This project performs training-free biomechanics analysis from monocular video.
Demo (Cam 1 | Visualisation): monocular video results adapted to the OpenCap backend. To view them locally:

```bash
cd SAM4Dcap-core/SAM4Dcap/output_viz
python -m http.server 8088 --bind 127.0.0.1
```

Then open http://127.0.0.1:8088/webviz_pipeline2/

Hardware used in our experiments:
- RTX PRO 6000 (96 GB)
- 22 vCPU Intel(R) Xeon(R) Platinum 8470Q
Environment paths and source projects
- SAM4Dcap-core/envs/body4d: https://github.com/gaomingqi/sam-body4d
- SAM4Dcap-core/envs/MHRtoSMPL: https://github.com/facebookresearch/MHR
- SAM4Dcap-core/envs/opencap: https://github.com/opencap-org/opencap-core
- SAM4Dcap-core/envs/opensim: https://github.com/opensim-org/opensim-core
We compiled with CUDA support for the GPU architecture used in our experiments (sm_120), using these versions:
- MHRtoSMPL: Python 3.12.12; PyTorch 2.8.0+cu128; CUDA 12.8
- body4d: Python 3.12.12; PyTorch 2.8.0+cu128; CUDA 12.8
- opencap: Python 3.9.25; PyTorch 2.8.0+cu128; CUDA 12.8
- opensim: Python 3.10.19; PyTorch 2.9.1+cu128; CUDA 12.8
We recommend adapting the setup according to your GPU configuration (at least 24 GB of memory) and the official library environments mentioned above.
Full environment files will be uploaded to a cloud drive later.
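A mismatched PyTorch/CUDA build is a common setup failure when recreating environments like the ones above. As an illustrative, stdlib-only sanity check (this helper is not part of the repository), the CUDA tag can be extracted from a PyTorch version string such as `2.8.0+cu128`:

```python
from typing import Optional

def cuda_tag(torch_version: str) -> Optional[str]:
    """Extract the CUDA build tag from a PyTorch version string,
    e.g. '2.8.0+cu128' -> 'cu128'; returns None for CPU-only builds."""
    _, sep, local = torch_version.partition("+")
    return local if sep and local.startswith("cu") else None

# Inside each environment you could then check, for example:
#   import torch
#   assert cuda_tag(torch.__version__) == "cu128"
print(cuda_tag("2.8.0+cu128"))  # cu128
print(cuda_tag("2.9.1+cu128"))  # cu128
print(cuda_tag("2.8.0"))        # None (CPU-only build)
```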
Except for OpenSim (which remains unchanged), all repositories contain modified or new code. For detailed documentation, please refer to:
SAM4Dcap-core/Readme_modified/README.md

- The modified versions have been uploaded to branches of this repository. For AddBiomechanics, please download the archive from https://figshare.com/articles/software/fronted/31150132?file=61359931 and extract it to SAM4Dcap-core/Addbiomechanics/fronted

SMPL model download: https://smpl.is.tue.mpg.de/. Convert the model to the chumpy-free version with:

```bash
python SAM4Dcap-core/MHRtoSMPL/convert_smpl_chumpy_free.py
```
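The conversion script above ships with this repository. As a rough sketch of what a chumpy-free conversion typically does (function and key names here are illustrative, not the script's actual API), it loads the SMPL pickle and replaces chumpy arrays with plain NumPy arrays so the model can be loaded without the legacy `chumpy` dependency:

```python
import pickle
import numpy as np

def strip_chumpy(model: dict) -> dict:
    """Replace chumpy arrays (anything exposing a .r numpy view)
    with plain numpy arrays; pass other values through unchanged."""
    clean = {}
    for key, value in model.items():
        if hasattr(value, "r"):               # chumpy arrays expose .r
            clean[key] = np.array(value.r)
        elif isinstance(value, np.ndarray):
            clean[key] = np.array(value)
        else:
            clean[key] = value                # strings, scalars, sparse matrices, ...
    return clean

# Usage sketch (paths are placeholders, not the repository's layout):
# with open("basicmodel_m.pkl", "rb") as f:
#     model = pickle.load(f, encoding="latin1")
# with open("basicmodel_m_chumpy_free.pkl", "wb") as f:
#     pickle.dump(strip_chumpy(model), f)
```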
Model path checkpoints:
SAM4Dcap-core/Readme_modified/checkpoints.txt

Double-check paths before running:
SAM4Dcap-core/Readme_modified/check_again.txt

- Adapt AddBiomechanics with 105 keypoints (monocular video):
```bash
bash SAM4Dcap-core/SAM4Dcap/pipeline1.sh
```

View results:
- local (Linux): http://localhost:3088/
- online: https://app.addbiomechanics.org/
- Adapt OpenCap with 43 keypoints (monocular video):

```bash
bash SAM4Dcap-core/SAM4Dcap/pipeline2.sh
```

- OpenCap reproduction (binocular video):

```bash
bash SAM4Dcap-core/SAM4Dcap/opencap.sh
```
View results:
- Monocular video: http://127.0.0.1:8093/webviz/
- Binocular video: http://127.0.0.1:8090/web/webviz/
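When scripting runs, it can help to confirm that the local viewer servers are actually answering before opening a browser. A small stdlib-only liveness check (this helper is illustrative, not part of the repository):

```python
from urllib.request import urlopen
from urllib.error import URLError

def server_up(url: str, timeout: float = 2.0) -> bool:
    """Return True if an HTTP server answers at `url` within `timeout` seconds."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (URLError, OSError):
        return False

# Usage sketch against the viewer ports listed above:
# for url in ("http://127.0.0.1:8093/webviz/", "http://127.0.0.1:8090/web/webviz/"):
#     print(url, "up" if server_up(url) else "down")
```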
- Tool for custom keypoints (demo video: 1.26.3.-1.mp4):

```bash
bash SAM4Dcap-core/SAM4Dcap/select.sh
```
- Align (demo video: align-1.mp4):

```bash
cd SAM4Dcap-core/SAM4Dcap/align/webviz_compare
python -m http.server 8092 --bind 127.0.0.1
```
Because of GitHub's repository size limits, we will upload the complete project code and environments to a cloud drive. For help reproducing the project, contact wangli1@stu.scu.edu.cn.
We will further optimize pipeline1 and pipeline2 to achieve more accurate training-free IK solving and GRF analysis.
Thanks to:
- https://github.com/gaomingqi/sam-body4d
- https://github.com/facebookresearch/MHR
- https://github.com/opencap-org/opencap-core
- https://github.com/opensim-org/opensim-core
- https://github.com/MarilynKeller/SMPL2AddBiomechanics
- https://github.com/keenon/AddBiomechanic
If you use this project in your research, please cite it as follows:
```bibtex
@misc{wang2026sam4dcaptrainingfreebiomechanicaltwin,
  title={SAM4Dcap: Training-free Biomechanical Twin System from Monocular Video},
  author={Li Wang and HaoYu Wang and Xi Chen and ZeKun Jiang and Kang Li and Jian Li},
  year={2026},
  eprint={2602.13760},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.13760},
}
```





