
[AAAI 2023] Tracking and Reconstructing Hand Object Interactions from Point Cloud Sequences in the Wild

(Teaser figure)

Introduction

This is the PyTorch implementation of our paper Tracking and Reconstructing Hand Object Interactions from Point Cloud Sequences in the Wild. For more information, please visit our project page.

Installation

  • Our code has been tested with

    • Ubuntu 20.04
    • CUDA 11.7
    • Python 3.8
    • PyTorch 1.9.1 (NOTE: the PyTorch version should be < 1.11 to compile the CUDA code in pointnet_lib)
  • We recommend using Anaconda to create an environment, by running the following:

    conda create -n hotrack python=3.8
    conda activate hotrack
  • Install PyTorch and other dependencies.

    # Just an example for pytorch. Please visit https://pytorch.org/ for more.
    pip install torch==1.9.1+cu111 torchvision==0.10.1+cu111 torchaudio==0.9.1 -f https://download.pytorch.org/whl/torch_stable.html
    pip install -r requirements.txt
  • Compile the CUDA code for the PointNet++ backbone. A quick environment check is sketched after this list.

    cd network/models/pointnet_lib
    python setup.py install
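
To sanity-check the environment before moving on, something along the lines below can help. This is only a minimal sketch: the module name used for the compiled PointNet++ extension is an assumption and may differ from what setup.py actually installs.

    # Minimal environment check (sketch). `pointnet2_utils` is an assumed name
    # for the compiled PointNet++ extension; adjust it to whatever setup.py installs.
    import torch

    print("PyTorch:", torch.__version__)             # expected 1.9.1 (< 1.11)
    print("CUDA toolkit:", torch.version.cuda)       # e.g. 11.1
    print("CUDA available:", torch.cuda.is_available())

    try:
        import pointnet2_utils  # hypothetical module name for the compiled ops
        print("PointNet++ CUDA ops imported successfully")
    except ImportError as err:
        print("Compiled extension not found; re-run setup.py in pointnet_lib:", err)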

Dataset

  • Download the MANO pickle data structures and save them to third_party/mano/models, following Manopth. You also need to install Manopth if you want to work with the DexYCB dataset. A minimal loading check is sketched after this list.

  • Download our SimGrasp dataset SimGrasp.zip and our pretrained models pretrained_models.zip from here. Note that SimGrasp_rawimg.zip, which contains the raw RGB and depth images of our dataset, is not required for training or testing.

  • Download the HO3D dataset (version 3) from its official website.

  • Download the DexYCB dataset from its official website.

  • Link the data folder to the path of your dataset:

     ln -s ${DATAFOLDER} data 
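
To confirm the MANO pickles are in the right place, a quick check with Manopth such as the one below can be run. This is a sketch under the assumption that Manopth is installed; the pose/shape settings are illustrative and not necessarily the ones used by HOTrack.

    # Minimal MANO loading check (sketch); assumes Manopth is installed and the
    # MANO pickles sit under third_party/mano/models as described above.
    import torch
    from manopth.manolayer import ManoLayer

    mano_layer = ManoLayer(
        mano_root='third_party/mano/models',  # location of the MANO pickles
        side='right', use_pca=True, ncomps=45, flat_hand_mean=False)

    pose = torch.zeros(1, 45 + 3)     # PCA pose coefficients + global rotation
    shape = torch.zeros(1, 10)        # shape (beta) coefficients
    verts, joints = mano_layer(pose, shape)
    print(verts.shape, joints.shape)  # expected: (1, 778, 3) and (1, 21, 3)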

Dataset Folder Structure

The dataset folders should be organized as follows.

  data
  ├── SimGrasp
  │   ├── img # raw RGB and depth images from SimGrasp_rawimg.zip; not needed for training or testing
  │   ├── objs  # in SimGrasp.zip
  │   ├── masks # in SimGrasp.zip
  │   ├── preproc # in SimGrasp.zip
  │   ├── splits  # in SimGrasp.zip
  │   └── SDF # in pretrained_models.zip
  ├── YCB
  │   ├── CatPose2InsPose.npy  # in pretrained_models.zip
  │   ├── models # Download from the DexYCB dataset
  │   └── SDF # in pretrained_models.zip
  ├── HO3D
  │   ├── calibration
  │   ├── train # include both HO3D_v3.zip and HO3D_v3_segmentations_rendered.zip
  │   ├── splits  # in pretrained_models.zip
  │   └── SDF # in pretrained_models.zip
  ├── DexYCB 
  │   ├── 20200709-subject-01
  │   ├── ...
  │   ├── 20201022-subject-10
  │   ├── calibration
  │   ├── splits  # in pretrained_models.zip
  │   └── SDF  # in pretrained_models.zip
  └── exps # in pretrained_models.zip

Note that we use Curriculum-DeepSDF to initialize the object SDF models, taking as input the observed point cloud at frame 0 of each test trajectory. If you only want to use our pretrained SDF models, you do not need to install Curriculum-DeepSDF.
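
For context, a DeepSDF-style model represents an object as a decoder that maps a per-object latent code plus a 3D query point to a signed distance; initialization fits such a latent code to the frame-0 point cloud. The sketch below only illustrates this kind of query; the class and layer sizes are hypothetical and do not mirror the actual Curriculum-DeepSDF implementation.

    # Illustrative DeepSDF-style query (sketch); names and sizes are hypothetical.
    import torch
    import torch.nn as nn

    class SDFDecoder(nn.Module):
        """Toy DeepSDF-style decoder: (latent code, xyz) -> signed distance."""
        def __init__(self, latent_dim=256, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1), nn.Tanh())

        def forward(self, latent, xyz):
            # latent: (N, latent_dim) repeated per query point, xyz: (N, 3)
            return self.net(torch.cat([latent, xyz], dim=-1)).squeeze(-1)

    decoder = SDFDecoder()
    latent = torch.randn(1, 256)              # per-object shape code
    points = torch.rand(1000, 3) * 2 - 1      # query points in the unit cube
    sdf = decoder(latent.expand(points.shape[0], -1), points)
    print(sdf.shape)                          # (1000,) signed distances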

Running

  • To train HandTrackNet. The test results reported during training are based on single-frame evaluation rather than tracking a sequence.

      CUDA_VISIBLE_DEVICES=0 python network/train.py --config handtracknet_train_SimGrasp.yml
  • To track the hand with HandTrackNet over a sequence.

      CUDA_VISIBLE_DEVICES=0 python network/test.py --config handtracknet_test_SimGrasp.yml --num_worker 0
  • To run our full pipeline.

      CUDA_VISIBLE_DEVICES=0 python network/test.py --config objopt_test_HO3D.yml --num_worker 0 --save # 1. track object and save results
      CUDA_VISIBLE_DEVICES=0 python network/test.py --config handopt_test_HO3D.yml --num_worker 0 # 2. track hand using saved object pose

Citation

If you find our work useful in your research, please consider citing:

@article{chen2022tracking,
  title={Tracking and Reconstructing Hand Object Interactions from Point Cloud Sequences in the Wild},
  author={Chen, Jiayi and Yan, Mi and Zhang, Jiazhao and Xu, Yinzhen and Li, Xiaolong and Weng, Yijia and Yi, Li and Song, Shuran and Wang, He},
  journal={arXiv preprint arXiv:2209.12009},
  year={2022}
}

License

This work and the dataset are licensed under CC BY-NC 4.0.

