
Is there a way of using RGBD images as the input? #13

Open
MaxPalmer-UH opened this issue Jun 23, 2022 · 2 comments

Comments

@MaxPalmer-UH

This solution extracts depth data from photos as part of the pipeline (plus uncertainties?). Is there a way of providing RGBD data directly from depth cameras instead?

@barbararoessle
Owner

Hi, sure, it is also possible to optimize NeRF with the depth loss and depth-guided sampling from RGBD data.
For example, in the code, what we load as ground truth depth (gt_depths, gt_valid_depths) is the sensor depth from ScanNet. You could use this depth data instead of running depth completion on the sparse depth from SfM. When assigning the training depth maps, depths[i_train, :, :, 0] is the depth in meters and depths[i_train, :, :, 1] is the standard deviation in meters. The standard deviation can be set according to the accuracy of the depth sensor.
When using a different source of depth input, you will probably need to tune a suitable depth loss weight.
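The assignment described above could be sketched as follows. This is a minimal NumPy illustration, not the repository's actual loading code: the array names `sensor_depth_mm`, `valid_depths`, and the 2 cm standard deviation are assumptions for the example, while the `depths[i_train, :, :, 0/1]` layout follows the description above.

```python
import numpy as np

# Hypothetical input: N training views of H x W sensor depth, stored in
# millimeters (as e.g. ScanNet's 16-bit depth PNGs are).
N, H, W = 4, 8, 8
rng = np.random.default_rng(0)
sensor_depth_mm = rng.integers(0, 5000, size=(N, H, W)).astype(np.float32)

# Last dimension of 2: channel 0 = depth in meters, channel 1 = standard
# deviation in meters, matching the layout described in the comment above.
depths = np.zeros((N, H, W, 2), dtype=np.float32)
i_train = np.arange(N)  # indices of the training views

depths[i_train, :, :, 0] = sensor_depth_mm / 1000.0  # convert mm -> meters
depths[i_train, :, :, 1] = 0.02  # assumed ~2 cm std, set from the sensor's accuracy spec

# Pixels where the sensor reports zero depth carry no measurement; a validity
# mask analogous to gt_valid_depths could be built like this.
valid_depths = sensor_depth_mm > 0
```

The per-pixel standard deviation channel is what the depth loss uses to weight each measurement, so a sensor with a known depth-dependent error model could also fill channel 1 with a function of channel 0 rather than a constant.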

@liang3588

If I already have the dense depth maps, how do I compute the standard deviation? Can it be calculated from the depth of the surrounding pixels?
