Optimization over Disentangled Encoding: Unsupervised Cross-Domain Point Cloud Completion via Occlusion Factor Manipulation

by Jingyu Gong*, Fengqi Liu*, Jiachen Xu, Min Wang, Xin Tan, Zhizhong Zhang, Ran Yi, Haichuan Song, Yuan Xie, Lizhuang Ma. (*=equal contribution)

Introduction

This repository implements our ECCV 2022 paper. If you find it useful, please consider citing:

@inproceedings{gong2022optde,
    title={Optimization over Disentangled Encoding: Unsupervised Cross-Domain Point Cloud Completion via Occlusion Factor Manipulation},
    author={Gong, Jingyu and Liu, Fengqi and Xu, Jiachen and Wang, Min and Tan, Xin and Zhang, Zhizhong and Yi, Ran and Song, Haichuan and Xie, Yuan and Ma, Lizhuang},
    booktitle={European Conference on Computer Vision (ECCV)},
    year={2022}
}

Installation

Please follow the instructions below to set up your environment:

git clone git@github.com:azuki-miho/OptDE.git
cd OptDE
mkvirtualenv optde
workon optde
pip install -r requirements.txt

Dataset

We conduct our experiments on 3D-FUTURE, ModelNet, ScanNet, MatterPort3D, and KITTI. We obtain the models from 3D-FUTURE and ModelNet40 and modify the virtual-rendering code in PCN to generate the partial and complete point clouds, which are available here with password: 542h. We obtain the partial scans of ScanNet, MatterPort3D, and KITTI from pcl2pcl; please download them and put them in ./datasets/data. We take CRN as our source domain and obtain the partial and complete shapes from the CRN dataset.
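As a rough guide, after downloading, the data directory might look like the sketch below. The entry names are illustrative assumptions, not the actual archive names; match them to the files you obtain from the links above.

```shell
# Hypothetical layout of ./datasets/data (names are placeholders --
# use the actual directory names from the downloaded archives):
#
# datasets/data/
# ├── <pcl2pcl partial scans: ScanNet / MatterPort3D / KITTI>
# ├── <rendered 3D-FUTURE / ModelNet40 partial + complete clouds>
# └── <CRN partial + complete shapes (source domain)>
```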

Usage

Preparation

Our baseline also uses a discrimination loss as in ShapeInversion, so please download the pretrained discriminator models from ShapeInversion and save them to `./pretrained_models/`. If you want to use other source-domain data, you can use the code in ShapeInversion to pretrain a discriminator.

Disentangled Encoding Training

For disentangled encoding training with CRN chair as the source domain and 3D-FUTURE chair as the target domain, run the following script:

sh run.sh 0

For other experimental settings, change the REALDATA, VCLASS, and RCLASS variables in run.sh. If you want to change the log directory, modify LOGDIR in run.sh.
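For example, the relevant variables in run.sh might be set along these lines. The values below are illustrative assumptions; check run.sh itself for the options it actually accepts.

```shell
# Illustrative settings in run.sh (values are assumptions, not the
# script's canonical option names -- consult run.sh before editing):
REALDATA=3dfuture   # target-domain dataset (hypothetical name)
VCLASS=chair        # virtual (source-domain) class
RCLASS=chair        # real (target-domain) class
LOGDIR=./logs       # where logs and checkpoints are written (hypothetical path)
```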

Optimization over Disentangled Encoding

For optimization over disentangled encoding with CRN chair as the source domain and 3D-FUTURE chair as the target domain, first change LOGDATE in run_optimizer.sh to your log file name, then run the following script:

sh run_optimizer.sh 0

For other experimental settings, change the REALDATA, VCLASS, and RCLASS variables in run_optimizer.sh. If you want to change the log directory, modify LOGDIR in run_optimizer.sh.

Visualization

To visualize the completion results, first install Mitsuba; the code is tested with Mitsuba 2. After installation, change to the ./render directory. Then set PATH_TO_MITSUBA2 to your Mitsuba executable and change LOGDATE in run_render.sh to your log file name. Now you can run the following script:

sh run_render.sh

For other experimental settings, change REALDATA, RCLASS, RESULT_NAME, FINETUNE, or even LOGDIR accordingly.
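Concretely, the variables to set in run_render.sh might look like the following. Both values are hypothetical; use the path to your own Mitsuba 2 build and the log directory name produced by your training run.

```shell
# Illustrative settings in run_render.sh (both values are placeholders):
PATH_TO_MITSUBA2=/path/to/mitsuba2/build/dist/mitsuba   # your Mitsuba 2 executable
LOGDATE=2022-07-01_12-00-00                             # your training log directory name
```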

Acknowledgement

This code builds on ShapeInversion, ChamferDistancePytorch, PCN, pcl2pcl, and Mitsuba2PointCloudRenderer. The models used for partial and complete shape generation are from 3D-FUTURE and ModelNet. CRN and real-world point clouds are provided by CRN and pcl2pcl. If you find them useful, please also cite them in your work.
