CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians
Avinash Paliwal, Wei Ye, Jinhui Xiong, Dmytro Kotovenko, Rakesh Ranjan, Vikas Chandra, Nima Khademi Kalantari
ECCV 2024
You can set up the Anaconda environment using:
conda env create --file environment.yml
conda activate coherentgs
CUDA 11.7 is strongly recommended.
You can download the dataset ...
Train on the LLFF dataset with 3 input views (you can choose from [2, 3, 4] views):
python train.py --source_path path/nerf_llff_data/flower --eval --model_path output/flower --num_cameras 3
Run the following script to render the video:
python renderpath.py --source_path path/nerf_llff_data/flower --eval --model_path output/flower
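To cover all supported sparse-view settings, the training and rendering commands above can be generated in a small sweep. The script below is a sketch, not part of the repo: `DATA_ROOT`, `SCENE`, and the per-view output naming are assumptions, and it only prints the commands so you can inspect them before running.

```shell
#!/bin/sh
# Sketch: print the train + render command for each supported view count.
# DATA_ROOT, SCENE, and the output naming scheme are assumptions, not repo conventions.
DATA_ROOT=path/nerf_llff_data
SCENE=flower
for N in 2 3 4; do
    OUT="output/${SCENE}_${N}views"
    echo "python train.py --source_path $DATA_ROOT/$SCENE --eval --model_path $OUT --num_cameras $N"
    echo "python renderpath.py --source_path $DATA_ROOT/$SCENE --eval --model_path $OUT"
done
```

Once the printed commands match your directory layout, pipe the output to `sh` to execute them.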
The repo is built on top of 3D Gaussian Splatting.
The modified rasterizer for rendering depth and the eval script are from FSGS.
If you find our work useful for your project, please consider citing the following paper:
@inproceedings{paliwal2024coherentgs,
title={{CoherentGS}: Sparse Novel View Synthesis with Coherent {3D} Gaussians},
author={Paliwal, Avinash and Ye, Wei and Xiong, Jinhui and Kotovenko, Dmytro and Ranjan, Rakesh and Chandra, Vikas and Kalantari, Nima Khademi},
booktitle={European Conference on Computer Vision},
pages={19--37},
year={2024},
organization={Springer}
}