This repo contains the dataset scripts and the actual dataset used in the paper "LFSphereNet: Real Time Spherical Light Field Reconstruction from a Single Omnidirectional Image":
- Spherical Light Field Dataset (~14 GB)
- Real Photographic Light Field Data (~1.18 GB)
Download it as a zip file from here.
The supplementary video can be accessed here: Video File
The scripts in Blender_Scripts can be used to generate a light field dataset with any X,Y,Z grid size, for example 7x1x7.
The following camera properties are used:
Camera Type: Panorama
Projection: Equirectangular
Image Width: 2048px
Image Height: 1024px
Depending on the scene, three major setup changes are required in the code (see the sketch after this list):
SCENE_KEY, for example: '_mainScene'
CAMERA_NAME, for example: 'Camera.001'
grid_size = [(X,Y,Z)], for example: to render 7x1x7, put [(7,1,7)]
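For orientation, here is a minimal sketch of how such a configuration block and the panoramic camera setup might look in a Blender Python script. Only `SCENE_KEY`, `CAMERA_NAME`, `grid_size`, and the camera properties listed above come from this repo; everything else, including the assumption that Cycles exposes the panorama settings via `camera_data.cycles` (as in Blender versions before 4.0), is illustrative and may differ from the actual scripts.

```python
import bpy

# Scene-specific configuration (example values; adjust per scene)
SCENE_KEY = '_mainScene'        # key identifying the scene in the .blend file
CAMERA_NAME = 'Camera.001'      # name of the panoramic camera object
grid_size = [(7, 1, 7)]         # X, Y, Z camera grid, e.g. 7x1x7

scene = bpy.context.scene
cam_data = bpy.data.objects[CAMERA_NAME].data

# Panoramic, equirectangular camera as listed above
cam_data.type = 'PANO'
cam_data.cycles.panorama_type = 'EQUIRECTANGULAR'  # Cycles panorama setting (pre-4.0 API)

# 2048 x 1024 equirectangular output, rendered with Cycles
scene.render.engine = 'CYCLES'
scene.render.resolution_x = 2048
scene.render.resolution_y = 1024
```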
The template folder structure with the following parameters (a sketch that enumerates the matching image paths follows the tree):
Grid size: 7x1x7
Image Width: 1024
Sampling: 1000
```
Scene_name/
|- Data/
|  |- 360/
|     |- 0SceneName/
|        |- w1024_s1000_PANO/
|           |- 7_1_7/
|              |- 00000_000.png
|              |- 00001_000.png
|              |- ...
|              |- 00048_000.png
|- Logs/
   |- 360/
      |- 0_logFile.txt
      |- 1_logFile.txt
      |- ...
      |- 14_logFile.txt
```
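For a 7x1x7 grid, the 49 camera positions are stored as 00000_000.png through 00048_000.png. The snippet below is a small, hypothetical helper that enumerates the image paths implied by the folder template above; the mapping from flat index to grid position is not specified here and remains defined by the Blender scripts.

```python
import os

def expected_image_paths(scene_root, grid=(7, 1, 7), width=1024, sampling=1000):
    """List the image paths the folder template above implies for one scene."""
    gx, gy, gz = grid
    folder = os.path.join(
        scene_root, "Data", "360", "0SceneName",
        f"w{width}_s{sampling}_PANO", f"{gx}_{gy}_{gz}"
    )
    n_views = gx * gy * gz  # 49 views for a 7x1x7 grid
    return [os.path.join(folder, f"{i:05d}_000.png") for i in range(n_views)]

paths = expected_image_paths("Scene_name")
print(paths[0])   # Scene_name/Data/360/0SceneName/w1024_s1000_PANO/7_1_7/00000_000.png
print(paths[-1])  # .../7_1_7/00048_000.png
```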
- Start the Blender app, or launch it from the terminal with `blender`
- Load the scene
- Open the script window
- Select Open File, then choose the Python file that corresponds to that scene
- Modify `DEFAULT_ROOT`, `SCENE_NAME`, `SCENE_KEY`, `SAMPLING` according to the scene that is loaded (if you add your own custom camera, or if the scene key differs from the one already in the script); see the sketch below
- Run the script from Blender
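As an illustration, the block to adjust at the top of the script could look roughly like this; only the four variable names are taken from the scripts, while the values and comments are placeholders.

```python
# Example configuration (placeholder values; adjust per scene and machine)
DEFAULT_ROOT = "/path/to/output/Scene_name"  # root folder for rendered images and logs
SCENE_NAME = "Classroom"                     # name of the loaded scene
SCENE_KEY = "_mainScene"                     # scene key, if it differs from the script default
SAMPLING = 1000                              # render samples per image (the "s1000" in the folder name)
```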
- After generating all scenes, use `processor.py` to generate the order of files as used in `Spherical_Light_Field_Dataset`. You will need to copy each folder into a new `base_dataset` folder first (a sketch of this copy step follows).
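A minimal sketch of that copy step, assuming all generated Scene_name folders live under a single renders folder (a placeholder name); the call to `processor.py` itself is omitted because its exact invocation depends on the script.

```python
import shutil
from pathlib import Path

renders_root = Path("renders")        # placeholder: where the rendered scene folders ended up
base_dataset = Path("base_dataset")   # new folder expected by processor.py
base_dataset.mkdir(exist_ok=True)

# Copy every generated scene folder into base_dataset before running processor.py
for scene_dir in renders_root.iterdir():
    if scene_dir.is_dir():
        shutil.copytree(scene_dir, base_dataset / scene_dir.name, dirs_exist_ok=True)
```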
The scene files can be found on Blender's official website under the demo files section. URL: https://www.blender.org/download/demo-files/
- Classroom
- Lone Monk
- Barbershop
- Italian Flat
- Barcelona
- The computations were enabled by resources provided by the National Academic Infrastructure for Supercomputing in Sweden (NAISS) and the Swedish National Infrastructure for Computing (SNIC) at Alvis (https://www.c3se.chalmers.se/about/Alvis/) partially funded by the Swedish Research Council through grant agreements no. 2022-06725 and no. 2018-05973.
- The work was supported by the European Joint Doctoral Programme on Plenoptic Imaging (PLENOPTIMA) through the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 956770.
The initial code for the scenes has been taken from the paper Omni-NeRF: Neural Radiance Field from 360° Image Captures.
If you plan to use the dataset, please make sure to cite our paper:
@INPROCEEDINGS{gond2023lfspherenet,
author={Gond, Manu and Zerman, Emin and Knorr, Sebastian and Sjöström, Mårten},
booktitle={ACM SIGGRAPH European Conference on Visual Media Production (CVMP)},
title={{LFSphereNet}: Real Time Spherical Light Field Reconstruction from a Single Omnidirectional Image},
year={2023}
}