We follow the Breaking Bad dataset for data pre-processing. For more information about data processing, please refer to the dataset website.
After processing the data, ensure that you have a folder named data with the following structure:
data
├── breaking_bad
│   ├── everyday
│   │   ├── BeerBottle
│   │   │   ├── ...
│   │   ├── ...
│   ├── everyday.train.txt
│   ├── everyday.val.txt
│   └── ...
└── ...
Only the everyday subset is necessary.
The original benchmark code of the Breaking Bad dataset samples point clouds from meshes in every batch, which is time-consuming. We instead pre-process the mesh data once, generating the point clouds and their attributes ahead of time.
cd puzzlefusion-plusplus/
python generate_pc_data.py +data save_pc_data_path=data/pc_data/everyday/
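Conceptually, this pre-processing step is area-weighted sampling of points on each mesh surface. Below is a minimal numpy sketch of that idea; it is an illustration only, not the project's generate_pc_data script, whose output format and attributes may differ:

```python
import numpy as np

def sample_surface(vertices, faces, n_points, rng=None):
    """Uniformly sample n_points on a triangle mesh surface.

    vertices: (V, 3) float array; faces: (F, 3) int array.
    """
    rng = np.random.default_rng(rng)
    tri = vertices[faces]  # (F, 3, 3): corner coordinates per face
    # Triangle areas via the cross product of two edge vectors.
    cross = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=1)
    # Pick faces with probability proportional to their area.
    idx = rng.choice(len(faces), size=n_points, p=areas / areas.sum())
    # Uniform barycentric coordinates inside each chosen triangle.
    u, v = rng.random(n_points), rng.random(n_points)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]
    t = tri[idx]
    return (t[:, 0]
            + u[:, None] * (t[:, 1] - t[:, 0])
            + v[:, None] * (t[:, 2] - t[:, 0]))

# Example: 1024 points on a unit tetrahedron.
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
pc = sample_surface(verts, faces, 1024, rng=0)
```

Caching the result of this sampling once per mesh is what removes the per-batch cost mentioned above.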
We also provide the pre-processed data here.
You can download the verifier data from here.
You can download the matching data from here.
The verifier data and matching data are generated from Jigsaw. Since this process is quite complex, we provide the processed data directly for now; more details on how to reproduce it will be provided later.
We provide the checkpoints at this link. Please download the archive, place it under ./work_dirs/, and unzip it.
Finally, the overall data structure should look like:
puzzlefusion-plusplus/
├── data
│   ├── pc_data
│   ├── verifier_data
│   ├── matching_data
│   └── ...
├── output
│   ├── autoencoder
│   ├── denoiser
│   └── ...
└── ...