Broken Offline NeRFCapture #59
Comments
Hi, thanks for trying out our code! As noted in this comment, the offline mode in NeRFCapture is broken and tends to produce erroneous depth maps: #7 (comment). I believe this is the problem you are currently facing. We don't have an alternative to the NeRFCapture setup yet, but we hope to release an updated variant of the demo soon. Another possibility is to use apps such as Record3D (though the input format might have to be looked into).
I tried to fix the offline mode of NeRFCapture: https://github.com/Zhangyangrui916/NeRFCapture
For those still struggling with offline data, please use https://spectacularai.github.io/docs/sdk/tools/nerf.html instead of NeRFCapture.
Thanks for sharing this, looks very cool! We will update our README to reflect the same.
Hi, I also noticed that NeRFCapture's offline mode is broken. I tested the Spectacular AI app with nerfstudio and it works.
Thanks for sharing your paper and code first; they look really cool.
My approach
I tried to run the code on my own computer, but I don't know anything about CycloneDDS at all, so I could only try to run SplaTAM by manually copying the RGB images, depth images, and transforms.json captured by NeRFCapture onto my computer. Here is what I tried to do:
- Put the captured files into the `<SplaTAM>/experiments/iPhone_Captures/offline_demo` folder.
- Renamed the depth images (`<num>.depth.png`) to `<num>.png` inside the `offline_demo` folder and ran a script to change the image names in `transforms.json`.

Question
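The renaming and JSON update described above can be sketched roughly as follows. This is a minimal, hypothetical helper, not SplaTAM's own tooling: it assumes a nerfstudio-style `transforms.json` with a `frames` list and `file_path` entries, and that the image files sit next to the JSON; adjust the keys and paths to your actual capture layout.

```python
import json
import re
from pathlib import Path

def prepare_offline_capture(capture_dir):
    """Rename captured frames to plain <num>.png and update the
    file_path entries in transforms.json to match.

    ASSUMPTION: transforms.json uses nerfstudio-style keys
    ("frames", "file_path"); adapt if your capture differs.
    """
    capture_dir = Path(capture_dir)
    meta_path = capture_dir / "transforms.json"
    meta = json.loads(meta_path.read_text())

    for frame in meta.get("frames", []):
        old = Path(frame["file_path"])
        # Extract the numeric frame id from names like "images/IMG_0003".
        match = re.search(r"(\d+)", old.name)
        if match is None:
            continue  # leave frames without a numeric id untouched
        new_name = f"{int(match.group(1))}.png"
        src = capture_dir / old.name  # assumes images next to transforms.json
        dst = capture_dir / new_name
        if src.exists() and src != dst:
            src.rename(dst)
        frame["file_path"] = new_name

    meta_path.write_text(json.dumps(meta, indent=2))
    return meta
```

Running this once over the `offline_demo` folder should leave the filenames on disk and the `file_path` entries in agreement, which is the part that is easy to get silently wrong when renaming by hand.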
The code runs without any problems, but unfortunately the results are very poor (even though I have "GT Poses for Tracking" turned on).
Without turning on GT, the generated Gaussian clusters are all squished together, and both Ground Truth Depth and Rasterized Depth in the eval show 0. I'm wondering what I'm doing wrong? One possibility I can think of is that I'm getting the wrong depth map, because the same code behaves fine when running the Replica data.
Any suggestions on adjusting the NeRFCapture depth map, please? Or any other way to get depth from images/videos?
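Since the eval reports a Ground Truth Depth of 0, it may be worth sanity-checking the depth PNGs themselves before touching any SLAM settings. Below is a quick, hypothetical check (not part of SplaTAM), assuming 16-bit depth PNGs in millimeters, as many iPhone capture apps save; change `depth_scale` if your app uses a different unit.

```python
import numpy as np
from PIL import Image

def check_depth_png(path, depth_scale=1000.0):
    """Load a depth PNG and report basic statistics.

    ASSUMPTION: depth is stored as integer millimeters, so
    depth_scale=1000.0 converts to meters; adjust for your capture app.
    """
    arr = np.asarray(Image.open(path))
    depth_m = arr.astype(np.float32) / depth_scale
    stats = {
        "dtype": str(arr.dtype),        # 8-bit depth is usually a broken export
        "min_m": float(depth_m.min()),
        "max_m": float(depth_m.max()),
        "zero_fraction": float((arr == 0).mean()),
    }
    # An all-zero map means the loader will see no depth at all.
    stats["looks_broken"] = stats["max_m"] == 0.0
    return stats
```

If `max_m` is 0 or the values are implausible (e.g. a room reading as 60 m), the capture export is the problem rather than the SplaTAM config, which matches the broken-offline-mode behavior discussed above.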
Appreciate it.