Point-based Instance Completion with Scene Constraints
Wesley Khademi, Li Fuxin
Oregon State University
Abstract: Recent point-based object completion methods have demonstrated the ability to accurately recover the missing geometry of partially observed objects. However, these approaches are not well suited for completing objects within a scene, as they do not consider known scene constraints (e.g., other observed surfaces) in their completions and further expect the partial input to be in a canonical coordinate system, which does not hold for objects within scenes. While instance scene completion methods have been proposed for completing objects within a scene, they lag behind point-based object completion methods in terms of object completion quality and still do not consider known scene constraints during completion. To overcome these limitations, we propose a point cloud-based instance completion model that can robustly complete objects at arbitrary scale and pose in the scene. To enable reasoning at the scene level, we introduce a sparse set of scene constraints represented as point clouds and integrate them into our completion model via a cross-attention mechanism. To evaluate the instance scene completion task on indoor scenes, we further build a new synthetic dataset called ScanWCF, which contains labeled partial scans as well as aligned ground truth scene completions that are watertight and collision-free. Through several experiments, we demonstrate that our method achieves improved fidelity to partial scans, higher completion quality, and greater plausibility than existing state-of-the-art methods.
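As a rough illustration of the cross-attention mechanism mentioned in the abstract: the sketch below is not the model's actual architecture (which operates on learned point features at scene scale), it only shows the underlying operation of letting per-object query features attend to scene-constraint features, in dependency-free Python.

```python
import math

def cross_attention(queries, keys, values):
    """Single-head scaled dot-product cross-attention in pure Python.

    queries: list of d-dim feature vectors (e.g., per-point object features)
    keys/values: lists of d-dim feature vectors (e.g., scene-constraint features)
    Returns one attended feature vector per query.
    """
    d = len(keys[0])
    out = []
    for q in queries:
        # similarity of this query to every constraint feature
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        # numerically stable softmax over constraints
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # attended output: convex combination of constraint values
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out
```

Each query's output is a convex combination of the constraint values, weighted by how strongly the query matches each constraint key.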
Please follow the installation directions below to match the configuration we use for training/testing.
The required dependencies can be installed via conda and pip by running:
```bash
# Create conda environment
conda env create -f environment.yml
conda activate pbic

# Install hydra for managing configs
pip install hydra-core --upgrade

# Install NKSR
pip install nksr -f https://nksr.huangjh.tech/whl/torch-2.0.0+cu118.html

# Install PointNet++ ops
pip install libs/pointnet2_ops_lib/.
```

We do not provide a download of our preprocessed ShapeNetV2 meshes used for pre-training our object completion model. Instead, we provide scripts and directions for preprocessing the ShapeNetV2 dataset to be used for pre-training, which can be found here.
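After installation, a quick way to confirm the environment resolves the key dependencies is to probe for their module specs. The module names below are our assumptions based on the install steps above; adjust them if your packages expose different top-level names.

```python
import importlib.util

def missing_packages(names):
    """Return the subset of module names that cannot be imported
    in the current environment."""
    return [n for n in names if importlib.util.find_spec(n) is None]

# Assumed top-level module names for the dependencies installed above.
required = ["torch", "hydra", "nksr", "pointnet2_ops"]
print(missing_packages(required))  # an empty list means everything resolved
```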
ScanWCF is a dataset for the instance scene completion task based on room layouts from ScanNet.
The ScanWCF dataset contains:
- Instance segmented partial point cloud scans
- Scene background meshes
- Scene annotation files for placing ShapeNet meshes (GT object instances) into scenes
- Free/occluded space constraints used by our model
You can request access to the ScanWCF dataset by completing the following form: ScanWCF Terms of Use
Upon gaining access and downloading the dataset, place the downloaded data under the data directory to be used for training/testing. If done correctly, the directory structure should look like:
- point_based_instance_completion
  - data
    - ScanWCF
      - json_files
      - scenes
      - ...
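To confirm the dataset was placed correctly, a quick check like the following can be used. The helper name is ours; the expected subdirectory names come from the layout above.

```python
from pathlib import Path

def check_scanwcf_layout(root):
    """Return the expected ScanWCF subdirectories missing under root."""
    expected = ["json_files", "scenes"]  # from the directory structure above
    root = Path(root)
    return [d for d in expected if not (root / d).is_dir()]

# Example: print(check_scanwcf_layout("data/ScanWCF"))
```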
Due to licensing, we do not provide the watertight ShapeNet meshes in our dataset. Please refer to here for instructions on processing ShapeNet meshes to be used in our dataset.
If you want to train your own version of the scene completion model, you can download the weights of our object completion model pre-trained on ShapeNet to initialize the scene completion model from:
If you just want to run evaluation, you can directly download the weights of our scene completion model trained on our ScanWCF dataset:
To use downloaded model checkpoints, place the downloaded experiment directory under the experiments directory. By default, our config files point to the experiments of the provided model checkpoints (e.g., see experiment_name in configs/test.yaml or pretrained_path in configs/train.yaml).
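If you are unsure whether a downloaded checkpoint landed where the configs expect, a small helper like the following can list candidate checkpoint files under an experiment directory. The function name and the *.ckpt/*.pth patterns are illustrative assumptions, not part of the repo; match them to whatever files your downloaded experiment directory actually contains.

```python
from pathlib import Path

def find_checkpoints(experiments_dir, experiment_name):
    """List checkpoint-like files anywhere under an experiment directory."""
    exp = Path(experiments_dir) / experiment_name
    # *.ckpt / *.pth are common PyTorch checkpoint extensions (assumption)
    return sorted(p for pat in ("*.ckpt", "*.pth") for p in exp.rglob(pat))

# Example: print(find_checkpoints("experiments", "my_experiment"))
```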
To pre-train our object completion model on the watertight ShapeNet meshes, run:
```bash
python main/runner.py --config-name pretrain
```

We initialize our scene completion model from our pre-trained object completion model. To do so, you can provide the path to a saved model checkpoint in the pretrained_path parameter of the configs/train.yaml file. The saved model checkpoint can be generated by pre-training the object completion model yourself using the command above or by downloading our provided Object Completion Model Checkpoint and placing the experiment under the experiments directory. Then, to train our scene completion model on our ScanWCF dataset, run:
```bash
python main/runner.py --config-name train
```

Mask3D training and testing are currently not implemented in this repo. In the meantime, we provide a download link to the Mask3D instance segmentation predictions that we used for evaluation: Mask3D Predictions. The downloaded directory can be placed directly under the data directory to be used for evaluating our scene completion model.
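Among the metrics reported by the evaluation scripts further below is the Chamfer distance (CD). As a rough reference for what that metric computes, here is a symmetric CD between two small point sets in pure Python; the repo's evaluation/cd/eval.py may differ in details (e.g., squared vs. unsquared distances, normalization).

```python
def chamfer_distance(a, b):
    """Symmetric Chamfer distance between two 3D point sets:
    average squared nearest-neighbor distance, in both directions.
    O(len(a) * len(b)) brute force; reference only."""
    def sq(p, q):
        # squared Euclidean distance between two points
        return sum((pi - qi) ** 2 for pi, qi in zip(p, q))
    def one_way(src, dst):
        # mean distance from each src point to its nearest dst point
        return sum(min(sq(p, q) for q in dst) for p in src) / len(src)
    return one_way(a, b) + one_way(b, a)
```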
To run the scene completion model using the Mask3D instance predictions of partial scans use:
```bash
python main/runner.py --config-name test_mask3d
```

To run the scene completion model using the ground truth instance segmentations of partial scans use:
```bash
python main/runner.py --config-name test
```

We provide a script for visualizing predicted completions, predicted surface normals, and reconstructed meshes. To visualize predictions, run:
```bash
# visualize all scenes using Mask3D instance segmentations
python main/visualize.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_Mask3D/completions

# visualize all scenes using ground truth instance segmentations
python main/visualize.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_ScanWCF/completions

# visualize a specific scene
python main/visualize.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/{results_Mask3D or results_ScanWCF}/completions --scene_id {scene id}_{partial id}
```

Use the following commands to run instance scene completion metrics:
```bash
# chamfer distance (CD)
python evaluation/cd/eval.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_Mask3D/reconstructions

# intersection over union (IoU)
python evaluation/iou/eval.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_Mask3D/reconstructions

# light field distance (LFD)
python evaluation/lfd/eval.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_Mask3D/reconstructions

# point coverage ratio (PCR)
python evaluation/pcr/eval.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_Mask3D/reconstructions
```

Use the following command to run scene completion metrics (i.e., completion results when ground truth instance segmentation is provided):

```bash
python evaluation/scene_completion_evaluation.py --data_dir ./data/ScanWCF --pred_dir ./experiments/{experiment name}/results_ScanWCF/completions
```

If you find our work useful, please cite:

```bibtex
@inproceedings{khademipoint,
  title={Point-based Instance Completion with Scene Constraints},
  author={Khademi, Wesley and Li, Fuxin},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}
```
Parts of this codebase build on the following works; we thank their authors:
- NKSR
- Mask3D
- SeedFormer
- DIMR (Evaluation scripts only. These scripts are covered under the Apache 2.0 license.)
- PCN