
It doesn't work well for my custom datasets. #9

Closed
bring728 opened this issue Apr 20, 2022 · 3 comments

Comments

@bring728

First of all, thank you for sharing your great research.

I have 9 images and the cameras are all facing the same direction. I want to get 9 depth maps.
For the DTU dataset, depth estimation works fine with your network.
But when I input my image, I get the following bad result.
[Screenshots attached: 2022-04-20 19-28-04, 19-27-53, 19-27-35]

I don't know if it's related, but the camera coordinate system I used is right, down, forward. I set depth_min = 1.25 and depth_interval = 0.1.
I set num_view to 9 so that all images are always used.

Do you have any guesses why?
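For reference, MVSNet-style networks typically build their plane-sweep depth hypotheses from exactly these two values, so a wrong depth_min/depth_interval pair can push all hypotheses off the scene. A minimal sketch of that construction (names and num_depth are assumptions, not taken from this repository):

```python
import numpy as np

# Sketch (assumed convention): evenly spaced fronto-parallel depth planes,
# d_i = depth_min + i * depth_interval, as used by MVSNet-style cost volumes.
def build_depth_hypotheses(depth_min=1.25, depth_interval=0.1, num_depth=192):
    return depth_min + depth_interval * np.arange(num_depth, dtype=np.float32)

depths = build_depth_hypotheses()
# With depth_min=1.25 and depth_interval=0.1, the sweep covers 1.25 .. 20.35,
# so the scene geometry must actually lie inside that range.
```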

@bring728
Author

Oh, I made a mistake. The intrinsics matrix needs to be divided by 4, but I omitted that step. After fixing it, the result is good to some extent.
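The division by 4 likely matters because many MVSNet-style networks regress depth at 1/4 of the input resolution, so the focal lengths and principal point must be scaled to match the feature-map resolution. A minimal sketch of that scaling (this is an assumption about the convention, not code from this repository; check the data-processing files for the exact stage at which it happens):

```python
import numpy as np

# Sketch: scale a 3x3 intrinsics matrix to a downsampled image.
# Only the first two rows (fx, skew, cx / fy, cy) scale; the last row stays [0, 0, 1].
def scale_intrinsics(K, scale=0.25):
    K = K.astype(np.float64).copy()
    K[0, :] *= scale  # fx, skew, cx
    K[1, :] *= scale  # fy, cy
    return K

K = np.array([[1446.16, 0.0, 800.0],   # hypothetical full-resolution intrinsics
              [0.0, 1446.16, 600.0],
              [0.0, 0.0, 1.0]])
K_quarter = scale_intrinsics(K)  # intrinsics matching the 1/4-resolution depth map
```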

@Sun-Xinnnnn

How did you solve this problem? I ran into a similar one.

@TruongKhang
Owner

@zhang-snowy, if you use our model on your custom dataset, we recommend reading the data-processing files (dtu_yao.py, blended_dataset.py) and the structure of the datasets in detail, and then preparing your data in the same way.
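For readers preparing custom data, the camera files that dtu_yao.py-style loaders consume commonly follow the MVSNet cam.txt convention. The layout below is an assumption based on that convention, not verified against this repository, so compare it against the loader before relying on it:

```python
import numpy as np

# Sketch of parsing an MVSNet-convention cam.txt (assumed layout):
#   "extrinsic", a 4x4 world-to-camera matrix,
#   "intrinsic", a 3x3 K matrix,
#   a final line with depth_min and depth_interval.
def read_cam_text(text):
    lines = [l.strip() for l in text.splitlines() if l.strip()]
    extrinsic = np.array([list(map(float, lines[i].split())) for i in range(1, 5)])
    intrinsic = np.array([list(map(float, lines[i].split())) for i in range(6, 9)])
    depth_min, depth_interval = map(float, lines[9].split()[:2])
    return extrinsic, intrinsic, depth_min, depth_interval

sample = """extrinsic
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1

intrinsic
1446.16 0 800
0 1446.16 600
0 0 1

1.25 0.1
"""
E, K, dmin, dint = read_cam_text(sample)
```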
