CRISP-Real2Sim contains the complete pipeline we use to turn in-the-wild video into human-scene reconstructions and downstream controllers. The steps below walk you through cloning the repo, setting up the environment, downloading the required assets, and running the provided scripts.
```bash
git clone --recursive https://github.com/Z1hanW/CRISP-Real2Sim.git
cd CRISP-Real2Sim

conda create -n crisp python=3.10 -y
conda activate crisp

pip install torch==2.4.1 torchvision==0.19.1 torchaudio==2.4.1 "xformers>=0.0.27" \
    --index-url https://download.pytorch.org/whl/cu124
pip install torch-scatter -f https://data.pyg.org/whl/torch-2.4.1+cu124.html
pip install -r requirements.txt
```

If you encounter compilation errors (usually on `pytorch3d` or CUDA extensions), install a compatible compiler toolchain:

```bash
conda install -c conda-forge gxx_linux-64=11
```
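After installation, a quick sanity check (a minimal sketch; the printed versions should match the wheels pinned above) confirms that the CUDA build of PyTorch is active before you move on:

```bash
# Check that PyTorch sees the GPU and that torch-scatter imports cleanly.
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"
python -c "import torch_scatter; print('torch-scatter OK')"
```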
Some dependencies (for rendering, viewers, etc.) are wrapped in helper scripts inside `prep/`:
```bash
cd prep
sh install*
cd ..
```
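If the `install*` glob matches several scripts, `sh` will treat the extra matches as arguments to the first one. A minimal sketch (assuming the helpers live directly in `prep/`) that runs them one at a time instead, so a failure points at a specific script:

```bash
cd prep
# Run each helper individually and stop at the first failure.
for script in install*; do
    echo "Running ${script}..."
    sh "${script}" || { echo "FAILED: ${script}"; break; }
done
cd ..
```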
Next, download the required assets:

- SMPL/SMPL-X body models (required for rendering and evaluation):
  ```
  prep/data/
  └── body_models/
      ├── smpl/SMPL_{GENDER}.pkl
      └── smplx/SMPLX_{GENDER}.pkl
  ```
- Demo videos and metadata:

  ```bash
  mkdir -p data
  gdown --folder "https://drive.google.com/drive/folders/1k712Oj9StmWXRzSeSMiHZc3LtvsVk2Rw" -O data
  ```
`gdown` is installed via `requirements.txt`. Use the `-O data` flag so Google Drive folders land under `CRISP-Real2Sim/data`.
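Before moving on, it can help to confirm the assets landed where the pipeline expects them (a minimal sketch; adjust the paths if you keep the models elsewhere):

```bash
# Expect SMPL_{GENDER}.pkl / SMPLX_{GENDER}.pkl under body_models,
# and at least one sequence folder under data/demo_videos.
ls prep/data/body_models/smpl prep/data/body_models/smplx
ls data/demo_videos
```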
The scripts expect your source sequences to live under either `*_videos` or `*_img` folders. Remove that suffix when you feed paths to the scripts:
```
data/
├── demo_videos/
│   └── walk-kicking/   # example sequence
└── YOUR_videos/
    ├── seq_a/
    └── seq_b/
```
Then run the full pipeline:

```bash
sh all_gv.sh /path/to/data/demo   # not /path/to/data/demo_videos
```

- The script iterates through every `*_videos` (or `*_img`) folder under the path you supply.
- Intermediate data, meshes, and evaluations are written back into the respective sequence directories.
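For example, a batch run over several `*_videos` roots could strip the suffix with shell parameter expansion (a sketch, assuming the layout in the tree above):

```bash
# Pass each root WITHOUT its _videos suffix, as the example above requires.
for root in data/*_videos; do
    sh all_gv.sh "${root%_videos}"
done
```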
To visualize a processed sequence:

```bash
sh vis.sh ${SEQ_NAME}
```

Common flags (see the script header for the full list):

- `--scene_name`: override the scene used for rendering.
- `--data_root`: custom data directory if not `./data`.
- `--out_dir`: write visualizations to a different folder.
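A combined invocation might look like the following (a sketch: it assumes the flags are passed straight through to `vis.sh`, and `walk-kicking` stands in for your sequence name):

```bash
# Render walk-kicking with an overridden scene and a custom output folder.
sh vis.sh walk-kicking --scene_name my_scene --data_root ./data --out_dir outputs/vis/walk-kicking
```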
Training utilities are still stabilizing. The current repo contains placeholder scripts under `agents/`:
- Review `agents/README.md` for the most recent instructions.
- Ensure the dataset generated in Section 3 is available before launching training.
- We recommend starting with a small subset (`--subset N`) to validate your setup before scaling; a smoke-test sketch follows this list.
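A first smoke test could then look like this. Note that `agents/train.py` is a hypothetical entry point; substitute whatever `agents/README.md` currently names, since only the `--subset` flag is documented above:

```bash
# Hypothetical: validate the setup on a 10-sequence subset before a full run.
python agents/train.py --subset 10
```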
Agent visualization builds on the same `vis.sh` infrastructure:
```bash
python agents/vis_agent.py \
    --checkpoint path/to/checkpoint.pt \
    --seq ${SEQ_NAME} \
    --out_dir outputs/agent_viz/${SEQ_NAME}
```

Pass `--scene_name` or `--camera_pose_file` if your controller requires a custom scene or camera path.
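For a controller that needs its own scene and camera path, the call might look like this (a sketch; `my_scene` and `poses/camera.json` are placeholder values):

```bash
python agents/vis_agent.py \
    --checkpoint path/to/checkpoint.pt \
    --seq ${SEQ_NAME} \
    --scene_name my_scene \
    --camera_pose_file poses/camera.json \
    --out_dir outputs/agent_viz/${SEQ_NAME}
```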