Code for the MICCAI 2023 Oral paper *EndoSurf: Neural Surface Reconstruction of Deformable Tissues with Stereo Endoscope Videos* by Ruyi Zha, Xuelian Cheng, Hongdong Li, Mehrtash Harandi, and Zongyuan Ge.
EndoSurf is a neural-field-based method that reconstructs deforming surgical scenes from stereo endoscope videos.
Qualitative comparison: EndoSurf (ours) vs. EndoNeRF (baseline).
We recommend using Miniconda to set up the environment.
```shell
# Create the conda environment
conda create -n endosurf python=3.9
conda activate endosurf
# Install packages
pip install torch==1.11.0+cu113 torchvision==0.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
pip install -r requirements.txt
```
- Follow these instructions to prepare the ENDONERF dataset.
- Follow these instructions to prepare the SCARED2019 dataset.
- Follow these instructions to download the checkpoints.
Use `src/trainer/trainer_endosurf.py` for training. All configurations are in `configs/endosurf`, and the training commands are listed in `scripts.sh`.
```shell
# Train EndoSurf on pulling_soft_tissues
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_pull.yml --mode train
# Train EndoSurf on cutting_tissues_twice
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_cut.yml --mode train
# Train EndoSurf on scared2019_dataset_1_keyframe_1
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_d1k1.yml --mode train
# Train EndoSurf on scared2019_dataset_2_keyframe_1
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_d2k1.yml --mode train
# Train EndoSurf on scared2019_dataset_3_keyframe_1
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_d3k1.yml --mode train
# Train EndoSurf on scared2019_dataset_6_keyframe_1
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_d6k1.yml --mode train
# Train EndoSurf on scared2019_dataset_7_keyframe_1
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py --cfg configs/endosurf/baseline/base_d7k1.yml --mode train
```
After training, run `src/trainer/trainer_endosurf.py` in test mode to evaluate the reconstruction results on the test set. You can evaluate 2D images with `--mode test_2d`, 3D meshes with `--mode test_3d`, or both with `--mode test`. For example, to test EndoSurf on the case `pulling_soft_tissues`:
```shell
# Evaluate all results
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode test
# Evaluate 2D images only
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode test_2d
# Evaluate 3D meshes only
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode test_3d
```
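All of these modes go through the same `--cfg`/`--mode` command-line interface. As a rough sketch of that surface (the actual parser in `src/trainer/trainer_endosurf.py` may accept additional options):

```python
import argparse

# Minimal sketch of the trainer's CLI: a config path plus a run mode.
# The real parser may define more arguments than these two.
parser = argparse.ArgumentParser(description="EndoSurf trainer (sketch)")
parser.add_argument("--cfg", required=True, help="path to a YAML config file")
parser.add_argument("--mode", default="train",
                    choices=["train", "test", "test_2d", "test_3d",
                             "demo", "demo_2d", "demo_3d"])

args = parser.parse_args(["--cfg", "configs/endosurf/baseline/base_pull.yml",
                          "--mode", "test_2d"])
print(args.mode)  # test_2d
```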
You can also render reconstruction results for all video frames (e.g., the demo GIFs) with `--mode demo`; use `--mode demo_2d` for 2D images only, or `--mode demo_3d` for 3D meshes only.
```shell
# Render both images and meshes
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode demo
# Render 2D images only
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode demo_2d
# Render 3D meshes only
CUDA_VISIBLE_DEVICES=0 python src/trainer/trainer_endosurf.py \
--cfg configs/endosurf/baseline/base_pull.yml --mode demo_3d
```
To reproduce all results shown in the paper, first download the data information files (`*.pkl`) from here and replace the existing files in `data/data_info`. This is necessary because `preprocess.py` involves random operations, e.g., point-cloud noise removal. Then download the pretrained models from here. All training/test/demo commands for our method, the baseline methods, and the ablation study are listed in `scripts.sh`.
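The data information files are ordinary Python pickles, so they can be inspected with the standard `pickle` module. The dictionary below is a made-up stand-in; the real schema of the `*.pkl` files in `data/data_info` is whatever `preprocess.py` writes:

```python
import pickle
from pathlib import Path

# Write and read back a toy data-info dict. The keys here are illustrative
# only; the actual *.pkl contents are produced by preprocess.py.
info = {"case": "pulling_soft_tissues", "n_frames": 63}
path = Path("toy_data_info.pkl")
path.write_bytes(pickle.dumps(info))

loaded = pickle.loads(path.read_bytes())
print(loaded["case"], loaded["n_frames"])  # pulling_soft_tissues 63
path.unlink()
```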
For any queries, please contact ruyi.zha@anu.edu.au.