This solution extracts depth data for photos as part of the pipeline (plus uncertainties?). Is there a way to provide RGBD data directly from depth cameras?
Hi, sure, it is also possible to optimize NeRF with the depth loss and depth-guided sampling from RGBD data.
For example, what the code loads as ground truth depth (gt_depths, gt_valid_depths here) is the sensor depth from ScanNet. You could use this depth data instead of running depth completion on the sparse depth from SfM. When assigning the train depth maps, depths[i_train, :, :, 0] is the depth in meters and depths[i_train, :, :, 1] is the standard deviation in meters. The standard deviation can be set according to the accuracy of the depth sensor.
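As a rough sketch, packing RGBD sensor depth into that (N, H, W, 2) layout could look like the snippet below. The helper name build_depth_tensor, the loader placeholder, and the constant noise value are assumptions for illustration, not part of the repository:

```python
import numpy as np

def build_depth_tensor(sensor_depths, sensor_std_m=0.01):
    """Pack sensor depth maps into the (N, H, W, 2) format described above:
    channel 0 is depth in meters, channel 1 is the standard deviation in meters.

    sensor_depths: float array of shape (N, H, W), depth in meters,
                   with 0 marking invalid pixels (as in ScanNet).
    sensor_std_m:  assumed per-pixel standard deviation of the depth sensor.
    """
    n, h, w = sensor_depths.shape
    depths = np.zeros((n, h, w, 2), dtype=np.float32)
    depths[..., 0] = sensor_depths
    # Constant uncertainty on valid pixels only; a depth-dependent model
    # (e.g. std growing with distance) may match real sensors better.
    depths[..., 1] = np.where(sensor_depths > 0, sensor_std_m, 0.0)
    return depths

# Hypothetical usage: replace the depth-completion output with sensor depth.
# sensor_depths = load_rgbd_depth_maps(...)   # your own loader
# depths[i_train] = build_depth_tensor(sensor_depths)
```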
When using a different source of depth input, you will probably need to find a suitable depth loss weight.