
# Data preparation

We follow the Breaking Bad dataset for data pre-processing. For more information about data processing, please refer to the dataset website.

After processing the data, ensure that you have a folder named data with the following structure:

```
data
├── breaking_bad
│   ├── everyday
│   │   ├── BeerBottle
│   │   │   ├── ...
│   │   ├── ...
│   ├── everyday.train.txt
│   ├── everyday.val.txt
│   └── ...
└── ...
```

Only the everyday subset is necessary.
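As a quick sanity check after pre-processing, the split files can be read with a small helper. This is only a sketch: it assumes each non-empty line of `everyday.train.txt` / `everyday.val.txt` names one sample relative to the dataset root, which follows the Breaking Bad layout shown above but is not guaranteed by this document.

```python
from pathlib import Path

def read_split(split_file):
    """Read a Breaking Bad split file into a list of sample paths.

    Assumes one relative path per non-empty line (format assumed,
    not confirmed by the dataset docs).
    """
    lines = Path(split_file).read_text().splitlines()
    return [ln.strip() for ln in lines if ln.strip()]
```

For example, `read_split("data/breaking_bad/everyday.train.txt")` should return the training samples of the `everyday` subset.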

## Generate point cloud data

The original benchmark code of the Breaking Bad dataset samples point clouds from the meshes in every batch, which is time-consuming. We instead pre-process the mesh data once, generating the point clouds and their attributes offline.

```shell
cd puzzlefusion-plusplus/
python generate_pc_data.py +data.save_pc_data_path=data/pc_data/everyday/
```

We also provide the pre-processed data here.
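For reference, the per-batch sampling that this pre-processing step replaces amounts to area-weighted uniform sampling of points on a triangle mesh. The sketch below shows the idea in plain NumPy; the function name and interface are illustrative, not the repository's actual API.

```python
import numpy as np

def sample_points_on_mesh(vertices, faces, n_points=1000, seed=0):
    """Area-weighted uniform sampling of points on a triangle mesh.

    vertices: (V, 3) float array, faces: (F, 3) int array.
    Returns an (n_points, 3) array of surface points.
    """
    rng = np.random.default_rng(seed)
    tris = vertices[faces]                                  # (F, 3, 3)
    # Triangle areas from the cross product of two edge vectors.
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick triangles proportionally to their surface area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random((2, n_points))
    flip = u + v > 1
    u[flip], v[flip] = 1 - u[flip], 1 - v[flip]
    w = 1 - u - v
    t = tris[idx]
    return u[:, None] * t[:, 0] + v[:, None] * t[:, 1] + w[:, None] * t[:, 2]
```

Caching the result of this sampling per fracture piece is what saves the repeated per-batch cost.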

## Verifier training data

You can download the verifier data from here.

## Matching data

You can download the matching data from here.

Both the verifier data and the matching data are generated from Jigsaw. Since this process is quite complex, we provide the processed data for now. More details on how to reproduce this processed data will be provided later.

## Checkpoints

We provide the checkpoints at this link. Please download them, place them under `./work_dirs/`, and unzip.

## Structure

Finally, the overall data structure should look like:

```
puzzlefusion-plusplus/
├── data
│   ├── pc_data
│   ├── verifier_data
│   ├── matching_data
│   └── ...
├── output
│   ├── autoencoder
│   ├── denoiser
│   └── ...
└── ...
```
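Before training, it can be worth verifying that all of the pieces above landed in the right place. A minimal check, with the expected paths taken from the tree above (the helper itself is illustrative, not part of the repository):

```python
from pathlib import Path

def missing_dirs(root, expected):
    """Return the expected sub-directories that are absent under root."""
    return [p for p in expected if not (Path(root) / p).is_dir()]

# Paths mirror the structure shown above.
EXPECTED = [
    "data/pc_data",
    "data/verifier_data",
    "data/matching_data",
    "output/autoencoder",
    "output/denoiser",
]
```

Running `missing_dirs("puzzlefusion-plusplus", EXPECTED)` should return an empty list once everything is downloaded and unpacked.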