NViSII Multi-View Synthesizer

(Figure: example rendered views)

This repo contains the skeleton code used to generate several datasets, e.g., RTMV, Watch It Move, and GraspNeRF. It offers multiple ways to load 3D models in NViSII and render views around them.

Installation

Install the dependencies with:

pip install -r requirements.txt

This code base needs a special version of NViSII, which is downloaded and installed by the previous step, but you can always download the wheel manually here. This updated NViSII mainly adds support for rendering the background as an alpha mask when exporting PNG files.
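
If you want to sanity-check the installation, a minimal script like the sketch below should produce a small test render (this is only an illustration, not part of the repo; the output file name is arbitrary):

# Minimal sanity check that the installed NViSII build works (illustrative only).
import nvisii

nvisii.initialize(headless=True, verbose=True)
nvisii.enable_denoiser()

# A camera entity is required before anything can be rendered.
camera = nvisii.entity.create(
    name="camera",
    transform=nvisii.transform.create("camera_transform"),
    camera=nvisii.camera.create(name="camera_intrinsics", aspect=1.0),
)
nvisii.set_camera_entity(camera)

# With the patched build, the PNG background is exported as an alpha mask.
nvisii.render_to_file(width=256, height=256, samples_per_pixel=16, file_path="test_render.png")
nvisii.deinitialize()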

Rendering scenes

The RTMV dataset has 4 types of environments. These can be recreated using the different configs, e.g., configs/abc.md, configs/abo.md, configs/bricks.md, and configs/gscanned.md. Please note that this repo does not ship any downloadable content (like 3D assets); links are provided below for you to download the data. We do, however, provide some minimal content to run the following examples, in the same file format as the original content. Please download it with:

sh download_sample_content.sh

On top of the RTMV-like datasets you can generate, we also offer a config to render a 360° view of a model. You are also welcome to write your own config file; since the scenes are config driven, feel free to mix things up.

360 view of an object or scene

python render.py --config configs/three_sixty_view.yaml

The script simply forces the renderer to create camera positions that lie on a circle; you control the angles through these variables:

# If you do not want a full circle, change this range.
camera_theta_range: [0,360]
# The middle of this interval is what is going to be rendered.
camera_elevation_range: [40,90]
# This controls how far the camera is from the scene.
camera_fixed_distance_factor: 1
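
Conceptually, the script samples angles in these ranges and converts them to camera positions on a circle around the scene; the sketch below illustrates the math (a simplified illustration, not the repo's actual code, and num_views and distance are made-up names):

# Sketch: place cameras on a circle around the origin (illustrative only).
import math

num_views = 36                # assumed number of views to render
theta_range = (0, 360)        # camera_theta_range
elevation_range = (40, 90)    # camera_elevation_range
distance = 1.0                # derived from camera_fixed_distance_factor

# The elevation used is the middle of the configured interval.
elevation = math.radians(sum(elevation_range) / 2.0)

positions = []
for i in range(num_views):
    theta = math.radians(theta_range[0] + (theta_range[1] - theta_range[0]) * i / num_views)
    # Spherical to Cartesian (z up); each camera then looks at the scene center.
    x = distance * math.cos(theta) * math.cos(elevation)
    y = distance * math.sin(theta) * math.cos(elevation)
    z = distance * math.sin(elevation)
    positions.append((x, y, z))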

You can make a video from this output; here is an example:

ffmpeg -framerate 30 -pattern_type glob -i 'output/360_views/*.png' -c:v libx264 -pix_fmt yuv420p three_sixty.mp4

Falling object scene

(Figure: example falling-object renders)

The RTMV dataset is mainly composed of falling-object scenes (except the Bricks scenes). The idea is to leverage PyBullet to create collision meshes for the models and then let them fall onto a plane. For this example, we are going to use some USD models, 2D textures for the planes, and HDRI maps to illuminate the scene (make sure you downloaded the sample content).

python render.py --config configs/falling_usd.yaml 
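
Under the hood, the physics part amounts to giving each mesh a PyBullet collision shape, dropping it above a plane, and stepping the simulation until things settle; the final poses are then copied onto the NViSII objects before rendering. Here is a rough, self-contained sketch (the mesh path is a placeholder, and the repo's actual logic is more involved):

# Rough sketch of the falling-object idea with PyBullet (illustrative only).
import pybullet as p

p.connect(p.DIRECT)            # headless physics server
p.setGravity(0, 0, -9.81)

# Ground plane for the objects to land on.
plane = p.createCollisionShape(p.GEOM_PLANE)
p.createMultiBody(baseMass=0, baseCollisionShapeIndex=plane)

# Collision shape built from a mesh file (placeholder path).
col = p.createCollisionShape(p.GEOM_MESH, fileName="sample_content/model.obj")
body = p.createMultiBody(baseMass=1.0, baseCollisionShapeIndex=col, basePosition=[0, 0, 1.0])

# Let the object fall and settle.
for _ in range(240):
    p.stepSimulation()

# This pose would be applied to the corresponding NViSII entity before rendering.
position, orientation = p.getBasePositionAndOrientation(body)
p.disconnect()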

In the YAML file there are different controls you can use to generate different datasets. For example, if you have the Google Scanned Objects dataset, you can change model_source to google_scanned. If you want to generate a scene similar to the RTMV Google Scanned scenes, set add_dome_hdri: False and table_color: False. You can play with how the data is generated by changing different values. You might have to change the scale of the cameras; pay attention to the scene distance: output on the console.

Config file

  • compute_visibility (Bool): Computes the visibility of each object as a fraction; 1.0 is fully visible, 0 is not visible. This can be costly to compute when there are many objects, since it requires pairwise (n²) comparisons; see the sketch below for the intuition.
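
One simple way to think about such a score is to compare the object's segmentation mask rendered alone against its mask in the full scene; the sketch below shows that idea with NumPy (an illustration, not the repo's implementation, and the mask arrays are assumed inputs):

# Sketch: visibility as the fraction of an object's pixels that survive occlusion.
import numpy as np

def visibility(alone_mask, scene_ids, obj_id):
    # alone_mask: boolean mask of the object rendered with nothing else in the scene.
    # scene_ids: per-pixel entity-id map of the full scene.
    # Returns 1.0 if fully visible, 0.0 if fully occluded or off screen.
    total = alone_mask.sum()
    if total == 0:
        return 0.0
    visible = np.logical_and(alone_mask, scene_ids == obj_id).sum()
    return float(visible) / float(total)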

Citation

If you use this code in your research, please cite the following:

@misc{morrical2021nvisii,
      title={NViSII: A Scriptable Tool for Photorealistic Image Generation}, 
      author={Nathan Morrical and Jonathan Tremblay and Yunzhi Lin and Stephen Tyree and Stan Birchfield and Valerio Pascucci and Ingo Wald},
      year={2021},
      eprint={2105.13962},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
