The goal of this repository is twofold:
- Neatly organize multiple static-view video files of a single dynamic scene into standard dataset structures (DyNeRF, Google Immersive, etc.) to allow for easier research and development into novel training approaches for dynamic volumetric scenes (see the layout sketch after this list).
- Create an end-to-end training and rendering pipeline for novel view synthesis on cloud GPUs.
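As a point of reference, here is a minimal sketch of the DyNeRF-style layout the organizer targets, assuming the Neural 3D Video convention of one `camNN.mp4` per static view plus a shared `poses_bounds.npy`; the scene path and helper name are illustrative, not this repo's final schema:

```python
# Sanity-check a DyNeRF-style scene directory. Assumed layout:
#   <scene>/cam00.mp4 ... camNN.mp4   one video per static camera
#   <scene>/poses_bounds.npy          shared LLFF-style pose file
from pathlib import Path

def check_dynerf_scene(scene_dir: str) -> None:
    scene = Path(scene_dir)
    videos = sorted(scene.glob("cam??.mp4"))
    assert videos, f"no camNN.mp4 videos found in {scene}"
    assert (scene / "poses_bounds.npy").exists(), "missing poses_bounds.npy"
    print(f"{scene.name}: {len(videos)} cameras, poses_bounds.npy present")

check_dynerf_scene("data/coffee_martini")  # hypothetical scene path
```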
- [x] Portable development environment
- [x] Multi-threaded frame extraction utility for DyNeRF (sketch after this list)
- [x] COLMAP pipeline for DyNeRF (sketch after this list)
- [x] CLI
- [x] poses_bounds.npy utility for custom datasets (sketch after this list)
- [ ] Cloud deployment for spiral render
- [ ] Novel view trajectory template
- [ ] Extend frame extraction to Google Immersive and other formats
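A minimal sketch of multi-threaded frame extraction, assuming the `camNN.mp4` naming above and `ffmpeg` on `PATH`; the `camNN/images/%04d.png` output pattern follows common preprocessing conventions and is an assumption, not this repo's CLI contract:

```python
# Extract every camera video to a per-camera frame folder in parallel.
# Each worker just waits on its ffmpeg subprocess, so threads (rather
# than processes) are sufficient here.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

def extract_frames(video: Path) -> None:
    out_dir = video.parent / video.stem / "images"  # e.g. cam00/images/
    out_dir.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(video), str(out_dir / "%04d.png")],
        check=True, capture_output=True,
    )

scene = Path("data/coffee_martini")  # hypothetical scene path
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(extract_frames, sorted(scene.glob("cam??.mp4"))))
```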
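The COLMAP step follows the stock sparse-reconstruction sequence; a sketch below, assuming frames from one shared timestamp of each camera have been gathered into a single `images/` folder (the usual approach for a static multi-camera rig). The commands and flags are standard COLMAP CLI; the `colmap/` paths are assumptions:

```python
# Run COLMAP feature extraction, matching, and sparse mapping in order.
import subprocess
from pathlib import Path

def colmap(*args: str) -> None:
    subprocess.run(["colmap", *args], check=True)

Path("colmap/sparse").mkdir(parents=True, exist_ok=True)
colmap("feature_extractor",
       "--database_path", "colmap/database.db",
       "--image_path", "colmap/images")
colmap("exhaustive_matcher", "--database_path", "colmap/database.db")
colmap("mapper",
       "--database_path", "colmap/database.db",
       "--image_path", "colmap/images",
       "--output_path", "colmap/sparse")
```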
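For the poses_bounds.npy utility, the target format is LLFF's: one row per image, 17 floats made of a flattened 3x5 matrix (a 3x4 camera-to-world pose with an appended `(height, width, focal)` column) followed by near/far depth bounds. A sketch with placeholder values standing in for real COLMAP output:

```python
import numpy as np

n_cams = 21                                  # e.g. a DyNeRF-style 21-camera rig
c2w = np.tile(np.eye(3, 4), (n_cams, 1, 1))  # placeholder 3x4 camera-to-world poses
hwf = np.tile([[1014.0], [1352.0], [1000.0]], (n_cams, 1, 1))  # height, width, focal
poses = np.concatenate([c2w, hwf], axis=2)   # (n_cams, 3, 5)
bounds = np.tile([0.1, 100.0], (n_cams, 1))  # per-image near/far depth bounds
poses_bounds = np.concatenate([poses.reshape(n_cams, -1), bounds], axis=1)
assert poses_bounds.shape == (n_cams, 17)
np.save("poses_bounds.npy", poses_bounds)
```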
- Training and rendering scripts from NVlabs' QUEEN
- poses_bounds.npy script from LLFF
- COLMAP workflows from 4D Gaussians