First of all, thank you for sharing your great research.
I have 9 images and the cameras are all facing the same direction. I want to get 9 depth maps.
For the DTU dataset, depth estimation works fine with your network.
But when I input my own images, I get the following bad result.
I don't know if it is related, but the camera coordinate system I used is right, down, forward. I set depth_min = 1.25 and depth_interval = 0.1 (a sketch of how I understand these values are used follows below).
num_view is set to 9 so that all images are always used.
Do you have any guesses why?
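For context, here is a minimal sketch of how I understand MVSNet-style networks build the depth hypotheses from depth_min and depth_interval. The number of planes (num_depth) and the variable names are my own assumptions, not taken from the repository:

```python
import numpy as np

# Assumed values: depth_min and depth_interval come from my cam files;
# num_depth (number of hypothesis planes) is a guess, not from the repo.
depth_min = 1.25
depth_interval = 0.1
num_depth = 192

# Uniformly spaced fronto-parallel depth hypotheses:
# depth_min, depth_min + interval, ..., depth_min + (num_depth - 1) * interval
depth_hypotheses = depth_min + depth_interval * np.arange(num_depth, dtype=np.float32)
print(depth_hypotheses[0], depth_hypotheses[-1])  # 1.25 ... 20.35
```

If this is right, my settings sweep roughly 1.25 to 20.35 in scene units, so I assumed the scene has to fit inside that range.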
Oh, I made a mistake. I had omitted the step where the intrinsics matrix is divided by 4. After fixing that, the result is reasonably good.
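For anyone hitting the same problem, the fix was just scaling the intrinsics to match the downsampled feature maps. A minimal sketch of what I mean, with my own example values (the 1/4 factor corresponds to the 4x-downsampled resolution at which the cost volume is built):

```python
import numpy as np

# Example 3x3 intrinsics at the original image resolution (values are mine).
K = np.array([[1446.16,    0.0, 800.0],
              [   0.0, 1446.16, 600.0],
              [   0.0,    0.0,    1.0]], dtype=np.float32)

# Features are downsampled by 4, so the focal lengths (fx, fy) and the
# principal point (cx, cy) must be divided by 4 as well; the homogeneous
# bottom row stays unchanged.
K_quarter = K.copy()
K_quarter[:2, :] /= 4.0
```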
@zhang-snowy, if you use our model on your custom dataset, we recommend that you read the data processing files (dtu_yao.py, blended_dataset.py) as well as the structure of the datasets in detail, and then prepare your data in the same way.
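As a rough sketch of what "the same way" means for the camera files, loaders in the MVSNet/DTU convention typically parse a per-view cam.txt like this (a simplified illustration, not the exact code from dtu_yao.py):

```python
import numpy as np

def read_cam_file(filename):
    # MVSNet/DTU cam.txt layout:
    #   "extrinsic" header, then a 4x4 world-to-camera matrix,
    #   "intrinsic" header, then a 3x3 K matrix,
    #   a final line with DEPTH_MIN and DEPTH_INTERVAL.
    with open(filename) as f:
        lines = [line.rstrip() for line in f]
    extrinsics = np.array(' '.join(lines[1:5]).split(), dtype=np.float32).reshape(4, 4)
    intrinsics = np.array(' '.join(lines[7:10]).split(), dtype=np.float32).reshape(3, 3)
    depth_min, depth_interval = (float(x) for x in lines[11].split()[:2])
    return intrinsics, extrinsics, depth_min, depth_interval
```

Matching this layout (including whether the stored intrinsics are at full or quarter resolution) is exactly the kind of detail worth checking against the dataset files.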