SAM4Dcap: Training-free Biomechanical Twin System from Monocular Video


Paper

The paper has been submitted to arXiv: https://arxiv.org/abs/2602.13760


What this project does

This project performs training-free biomechanics analysis from monocular video.


(Demo videos: Cam 1 input and the corresponding visualisation.)
Quick demo

Monocular video results adapted to the opencap backend:

cd SAM4Dcap-core/SAM4Dcap/output_viz
python -m http.server 8088 --bind 127.0.0.1
then open http://127.0.0.1:8088/webviz_pipeline2/ in a browser
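
The same demo server can also be launched programmatically. The sketch below is only a convenience wrapper around Python's standard http.server, equivalent to the commands above; it is not part of the repository.

```python
# Convenience sketch: serve a visualization directory on localhost,
# equivalent to `python -m http.server 8088 --bind 127.0.0.1` run from
# SAM4Dcap-core/SAM4Dcap/output_viz.
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

def serve(directory: str, port: int = 8088) -> HTTPServer:
    """Serve `directory` on 127.0.0.1:`port` in a background thread."""
    handler = partial(SimpleHTTPRequestHandler, directory=directory)
    server = HTTPServer(("127.0.0.1", port), handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# e.g. serve("SAM4Dcap-core/SAM4Dcap/output_viz"), then browse to
# http://127.0.0.1:8088/webviz_pipeline2/
```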

Prepare

Hardware

  • RTX PRO 6000 (96GB)
  • 22 vCPU Intel(R) Xeon(R) Platinum 8470Q

Environment

Environment paths and source projects

We compiled with CUDA for the GPU architecture used in our experiments (sm_120) using these versions:

  • MHRtoSMPL: Python 3.12.12; PyTorch 2.8.0+cu128; CUDA 12.8
  • body4d: Python 3.12.12; PyTorch 2.8.0+cu128; CUDA 12.8
  • opencap: Python 3.9.25; PyTorch 2.8.0+cu128; CUDA 12.8
  • opensim: Python 3.10.19; PyTorch 2.9.1+cu128; CUDA 12.8

We recommend adapting the setup according to your GPU configuration (at least 24 GB of memory) and the official library environments mentioned above.
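
Before running the pipelines, it can help to confirm that the installed PyTorch build roughly matches the versions listed above. The helper below is illustrative (not part of the repo); it parses "2.8.0+cu128"-style version strings and reports whether your environment matches.

```python
# Hedged sketch: compare an installed PyTorch build against the
# "<major>.<minor>.<patch>+cu<ver>" strings listed above. The required
# version here is from our setup; relax it for your own GPU.
from typing import Optional, Tuple

def parse_torch_version(v: str) -> Tuple[Tuple[int, ...], Optional[str]]:
    """Split '2.8.0+cu128' into ((2, 8, 0), '128')."""
    base, _, local = v.partition("+")
    nums = tuple(int(p) for p in base.split("."))
    cuda = local[2:] if local.startswith("cu") else None
    return nums, cuda

def matches(installed: str, required: str) -> bool:
    return parse_torch_version(installed) == parse_torch_version(required)

if __name__ == "__main__":
    try:
        import torch  # only present inside the project environments
        ok = matches(torch.__version__, "2.8.0+cu128")
        print("torch:", torch.__version__, "ok" if ok else "mismatch")
        print("cuda available:", torch.cuda.is_available())
    except ImportError:
        print("PyTorch is not installed in this environment")
```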

Full environment files will be uploaded to a cloud drive later.

Codebases and models

We integrated six repositories; details are listed in:

SAM4Dcap-core/Readme_modified/README.md
SAM4Dcap-core/Addbiomechanics/fronted

Models

Download the SMPL model from https://smpl.is.tue.mpg.de/ and convert it to the chumpy-free version with:

python SAM4Dcap-core/MHRtoSMPL/convert_smpl_chumpy_free.py
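
Conceptually, a chumpy-free conversion replaces chumpy arrays (which expose their evaluated value via `.r`) with plain arrays so that the SMPL pickle loads without chumpy installed. The sketch below illustrates that idea only; the repository's actual logic lives in convert_smpl_chumpy_free.py.

```python
# Illustrative sketch of chumpy-free conversion: recursively replace chumpy
# arrays, which expose their evaluated value via `.r`, with that plain value,
# then re-pickle the model.
import pickle

def strip_chumpy(obj):
    if hasattr(obj, "r"):  # chumpy array: take the evaluated value
        return obj.r
    if isinstance(obj, dict):
        return {k: strip_chumpy(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(strip_chumpy(v) for v in obj)
    return obj

def convert(src_pkl: str, dst_pkl: str) -> None:
    with open(src_pkl, "rb") as f:
        model = pickle.load(f, encoding="latin1")  # SMPL pickles are Python-2 era
    with open(dst_pkl, "wb") as f:
        pickle.dump(strip_chumpy(model), f)
```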

Model checkpoint paths are listed in:

SAM4Dcap-core/Readme_modified/checkpoints.txt

One-click run + visualization

Double-check paths before running:

SAM4Dcap-core/Readme_modified/check_again.txt
  • Adapt AddBiomechanics with 105 keypoints (Monocular Video):
    bash SAM4Dcap-core/SAM4Dcap/pipeline1.sh
    Local (Linux): http://localhost:3088/
    Online: https://app.addbiomechanics.org/
  • Adapt opencap with 43 keypoints (Monocular Video):
    bash SAM4Dcap-core/SAM4Dcap/pipeline2.sh
    Monocular Video result: http://127.0.0.1:8093/webviz/
  • opencap reproduction (Binocular Video):
    bash SAM4Dcap-core/SAM4Dcap/opencap.sh
    Binocular Video result: http://127.0.0.1:8090/web/webviz/
  • Tool for custom keypoints:
    bash SAM4Dcap-core/SAM4Dcap/select.sh
    (Demo video: 1.26.3.-1.mp4)
  • Align:
    cd SAM4Dcap-core/SAM4Dcap/align/webviz_compare
    python -m http.server 8092 --bind 127.0.0.1
    (Demo video: align-1.mp4)
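
The "double-check paths before running" step can be scripted. The sketch below simply verifies that the files this README references exist relative to the repository root; the list is illustrative, so extend it for your checkout.

```python
# Hedged sketch of the path double-check: report which of the referenced
# scripts and config files are missing before launching a pipeline.
from pathlib import Path

REQUIRED = [
    "SAM4Dcap-core/Readme_modified/check_again.txt",
    "SAM4Dcap-core/Readme_modified/checkpoints.txt",
    "SAM4Dcap-core/SAM4Dcap/pipeline1.sh",
    "SAM4Dcap-core/SAM4Dcap/pipeline2.sh",
    "SAM4Dcap-core/SAM4Dcap/opencap.sh",
]

def missing_paths(paths, root="."):
    """Return the subset of `paths` that do not exist under `root`."""
    base = Path(root)
    return [p for p in paths if not (base / p).exists()]

if __name__ == "__main__":
    gaps = missing_paths(REQUIRED)
    print("all paths present" if not gaps else f"missing: {gaps}")
```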

Quick setup

Because of GitHub's repository size limits, we will upload the complete project code and environments to a cloud drive. If you would like to reproduce the project more easily, contact wangli1@stu.scu.edu.cn.

Next

We will further optimize pipeline1 and pipeline2 to achieve more accurate training-free IK solving and GRF analysis.

Acknowledgements

Thanks to

Citation

If you use this project in your research, please cite it as follows:

BibTeX

@misc{wang2026sam4dcaptrainingfreebiomechanicaltwin,
      title={SAM4Dcap: Training-free Biomechanical Twin System from Monocular Video}, 
      author={Li Wang and HaoYu Wang and Xi Chen and ZeKun Jiang and Kang Li and Jian Li},
      year={2026},
      eprint={2602.13760},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url={https://arxiv.org/abs/2602.13760}, 
}
