Is your feature request related to a problem? Please describe.
For training on synthetic data from Mitsuba scenes, it would be useful to take a Mitsuba scene description, including its camera parameters, and parse it into a format compatible with Nerfstudio.
Describe the solution you'd like
An additional preprocessing step that takes a folder of Mitsuba outputs along with their XML (or possibly JSON) scene descriptors, extracts the camera parameters, and writes out the image layout expected by Nerfstudio.
Describe alternatives you've considered
I can implement this myself when I have time.
Maybe it would be close to the Metashape preprocessing?
Additional context
This would be useful for testing synthetic data from Mitsuba (primarily for evaluation of methods) under different lighting conditions, geometries, etc.
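As a rough illustration of what this preprocessing step might look like, here is a minimal sketch that pulls a perspective sensor's field of view and `to_world` matrix out of a Mitsuba XML scene description and lays them out as a Nerfstudio-style `transforms` dict. The embedded scene snippet, the image path, and the function name are all hypothetical; a real implementation would also need to reconcile Mitsuba's camera-to-world axis convention with the OpenGL-style convention Nerfstudio expects.

```python
import json
import math
import xml.etree.ElementTree as ET

# Hypothetical minimal Mitsuba scene snippet; a real scene.xml would
# contain the full scene description alongside the <sensor> element.
SCENE_XML = """
<scene version="3.0.0">
  <sensor type="perspective">
    <float name="fov" value="45"/>
    <transform name="to_world">
      <matrix value="1 0 0 0  0 1 0 0  0 0 1 4  0 0 0 1"/>
    </transform>
  </sensor>
</scene>
"""

def mitsuba_to_nerfstudio(xml_text, image_name):
    """Extract perspective-camera parameters from a Mitsuba scene
    description and lay them out as a Nerfstudio-style transforms dict.

    NOTE: this does not yet apply any axis-convention flip between
    Mitsuba's camera frame and the one Nerfstudio assumes.
    """
    root = ET.fromstring(xml_text)
    sensor = root.find("sensor")
    fov_deg = float(sensor.find("float[@name='fov']").get("value"))
    matrix_values = sensor.find("transform/matrix").get("value").split()
    # Reshape the flat row-major 4x4 matrix into nested lists.
    m = [list(map(float, matrix_values[i * 4:(i + 1) * 4])) for i in range(4)]
    return {
        "camera_angle_x": math.radians(fov_deg),
        "frames": [{"file_path": image_name, "transform_matrix": m}],
    }

transforms = mitsuba_to_nerfstudio(SCENE_XML, "images/render_0000.png")
print(json.dumps(transforms, indent=2))
```

A full version would loop over every rendered image in the output folder, accumulate one entry per frame, and write the result to a single `transforms.json`, much like the existing Metashape preprocessing does.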