Patch-based 3D Natural Scene Generation from a Single Example (CVPR 2023)

⭐ Generating (and editing) diverse 3D natural scenes from a single example, without any training.

High-quality 3D scenes created by our method (background sky added in post-processing)

Prerequisites

Setup environment

😃 We also provide a Dockerfile for easy installation, see Setup using Docker.
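For reference, a minimal sketch of the Docker route (run after cloning the repository below) might look like the following. The Dockerfile path and the image name sin3dgen are assumptions here; the Setup using Docker section remains the authoritative reference.

docker build -f docker/Dockerfile -t sin3dgen .
docker run --gpus all -it --rm -v "$(pwd)":/workspace sin3dgen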

Clone this repository.

git clone git@github.com:wyysf-98/Sin3DGen.git

Install the required packages.

conda create -n Sin3DGen python=3.8
conda activate Sin3DGen
conda install -c pytorch pytorch=1.9.1 torchvision=0.10.1 cudatoolkit=10.2 && \
conda install -c bottler nvidiacub && \
pip install -r docker/requirements.txt
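Optionally, a quick sanity check to confirm that PyTorch is installed and CUDA is visible inside the new environment:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"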
Data preparation

We provide some Plenoxels scenes and optimized mapping fields (see link) for a quick test. Please download and unzip them to the current folder. The folder structure should then look as follows:

└── data
    └── DevilsTower
        ├── mapping_fields
        |   ├── ...
        |   └── sxxxxxx.npz     # Synthesized mapping fields
        └── ckpts
            ├── rgb_fps8.mp4    # Visualization of the scene
            ├── ckpt_reso.npz   # Plenoxels checkpoint files
            └── mesh_reso.obj   # Extracted meshes
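To verify that the download unpacked correctly, one option is to peek inside a mapping-field file. This is just a hypothetical check; the exact array keys stored in the .npz are not documented here.

python -c "import numpy as np; d = np.load('./data/DevilsTower/mapping_fields/s566239.npz'); print(d.files)"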
Use your own data*

Please refer to svox2 to prepare your own data. You can also use Blender to render scenes, as in NSVF.

* Note that all scenes must be inside a unit box centered at the origin, as mentioned in the paper.

Then reconstruct your scenes using our forked version (Link).

The main differences from the original version are:

  • We modified parts of opt.py to preserve intermediate checkpoints during training.
  • We added more training stages to the configuration.
git clone git@github.com:wyysf-98/svox2.git
cd svox2
./launch.sh {your_data_name} 0 {your_data_path} -c configs/syn_start_from_12.json
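For instance, a hypothetical invocation for the DevilsTower scene could look like this (the raw data path is an assumption; substitute your own):

./launch.sh DevilsTower 0 ./data/DevilsTower_raw -c configs/syn_start_from_12.json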

Quick inference demo

For a quick local inference demo using an optimized mapping field, run:

python quick_inference_demo.py -m 'run' \
      --config './configs/default.yaml' \
      --exemplar './data/DevilsTower/ckpts' \
      --resume './data/DevilsTower/mapping_fields/s566239.npz' \
      --output './outputs/quick_inference_demo/DevilsTower_s566239' \
      --scene_reso '[512, 512, 512]' # resolution for visualization, change to '[384, 384, 384]' or lower when OOM
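If you want to render several of the provided mapping fields in one go, a simple bash loop over the same command works; the output naming below is just a suggested convention.

for f in ./data/DevilsTower/mapping_fields/*.npz; do
    python quick_inference_demo.py -m 'run' \
          --config './configs/default.yaml' \
          --exemplar './data/DevilsTower/ckpts' \
          --resume "$f" \
          --output "./outputs/quick_inference_demo/DevilsTower_$(basename "$f" .npz)" \
          --scene_reso '[512, 512, 512]'
done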

Optimization

We provide a Colab notebook for a demo.

We use an NVIDIA Tesla V100 with 32 GB RAM to generate the novel scenes, which takes about 10 minutes, as mentioned in our paper.

python generate.py -m 'run' \
      --config './configs/default.yaml' \
      --exemplar './data/DevilsTower/ckpts'

If you encounter an OOM problem, try reducing the pyr_reso for synthesis by adding --pyr_reso [16, 21, 28, 38, 51, 68, 91], or the scene_reso for visualization by adding --scene_reso [216, 216, 216]. For more configuration options, please refer to the comments in configs/default.yaml.
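For example, a lower-memory run combining both options could look like this, with the values taken from the suggestions above and the list arguments quoted as in the quick inference demo:

python generate.py -m 'run' \
      --config './configs/default.yaml' \
      --exemplar './data/DevilsTower/ckpts' \
      --pyr_reso '[16, 21, 28, 38, 51, 68, 91]' \
      --scene_reso '[216, 216, 216]'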

Evaluation

We provide code for evaluating the metrics (SIFID, SIMMD, image diversity, scene diversity). Please adapt the evaluation script to your actual setup.

cd evaluation
python compute_metrics.py --exp {out_path} \
                          --img_gt {GT_images_path} \
                          --mesh_gt {GT_mesh_path} \
                          --out_dir ./results/{exp_name}
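As a concrete but hypothetical example for the DevilsTower scene (run from within the evaluation directory): the ground-truth image path below is an assumption, while the mesh path follows the data layout above. Adjust all paths to your actual output and ground-truth locations.

python compute_metrics.py --exp ../outputs/quick_inference_demo/DevilsTower_s566239 \
                          --img_gt ../data/DevilsTower/gt_images \
                          --mesh_gt ../data/DevilsTower/ckpts/mesh_reso.obj \
                          --out_dir ./results/DevilsTower_s566239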

Acknowledgement

The implementation of exact_search.py and the image evaluation code partly took reference from Efficient-GPNN. We thank the authors for generously releasing their code.

Citation

If you find our work useful for your research, please consider citing it using the following BibTeX entry.

@inproceedings{weiyu23Sin3DGen,
    author    = {Weiyu Li and Xuelin Chen and Jue Wang and Baoquan Chen},
    title     = {Patch-based 3D Natural Scene Generation from a Single Example},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year      = {2023},
}