Problems trying to use depthmaps from a LIDAR sensor for DenseReconstruction #1055
Comments
@adricostas the code is fine, all you need to do is match the resolution of the depth-maps when you call
Thanks for the reply. Unfortunately, even after changing the max_resolution parameter, the densification doesn't work. Here are the values that I'm using:
Any idea?
I could have ideas if I knew what is not working: an error message, assert, line, etc.
The same as at the initial stage:
And this is because depthData.normalMap is empty when this code is executed:
First of all, why do you even run EstimateDepthMap if you already have the depth-maps? Do you want to refine them?
To fuse them as they are, add these params:
No, I don't want to refine them, I only want to take advantage of the depthmaps provided by a LIDAR to carry out the reconstruction. In other issues I read that I should convert the depthmaps that I have to .dmap files, and that's all I have done so far. I don't know if I should modify any of the parameters that I use to call the DenseReconstruction function with respect to the ones I used to perform the DenseReconstruction when I didn't have those depthmaps (listed in one of my previous comments). So far, I have only changed the values of max_resolution and min_resolution based on your previous comment. I don't know if I must change more parameters in this case.
Ok, I will try this! Thanks
Now it is working! Thank you very much for your support! Just out of curiosity, since the reconstruction that I'm getting using the LIDAR depthmaps is a little bit worse than the one returned when the depthmaps are estimated, is there a way of refining the LIDAR depthmaps as you mentioned in your previous comment?
Yes, there is, run it as you tried at first, but you need to debug why the normals are not estimated when the depth map is loaded; EstimateNormals should be called.
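For reference, estimating normals from an already loaded depth map can be done by back-projecting neighbouring pixels and taking a cross product. The sketch below is a generic illustration under assumed pinhole intrinsics (fx, fy, cx, cy at the depth-map resolution), not OpenMVS's EstimateNormals:

```cpp
// Generic sketch only (not OpenMVS's EstimateNormals): per-pixel normals derived
// from a depth map by back-projecting each pixel and its right/bottom neighbours,
// then taking the cross product of the two tangent vectors.
#include <cmath>
#include <opencv2/core.hpp>

cv::Mat NormalsFromDepth(const cv::Mat& depth /*CV_32F*/, float fx, float fy, float cx, float cy)
{
	// back-project pixel (r,c) to a 3D point in camera coordinates
	const auto backproject = [&](int r, int c) {
		const float d = depth.at<float>(r, c);
		return cv::Vec3f((c - cx) * d / fx, (r - cy) * d / fy, d);
	};
	cv::Mat normals(depth.size(), CV_32FC3, cv::Scalar::all(0));
	for (int r = 0; r + 1 < depth.rows; ++r) {
		for (int c = 0; c + 1 < depth.cols; ++c) {
			if (depth.at<float>(r, c) <= 0 || depth.at<float>(r, c + 1) <= 0 || depth.at<float>(r + 1, c) <= 0)
				continue; // skip invalid (zero) depths
			const cv::Vec3f p  = backproject(r, c);
			const cv::Vec3f dx = backproject(r, c + 1) - p;
			const cv::Vec3f dy = backproject(r + 1, c) - p;
			cv::Vec3f n = dx.cross(dy);
			const float len = std::sqrt(n.dot(n));
			if (len > 0)
				normals.at<cv::Vec3f>(r, c) = n * (1.0f / len); // sign may need flipping to point toward the camera
		}
	}
	return normals;
}
```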
Ok, thanks, your help is really appreciated. And what about using confidence maps together with the depthmaps provided by ARKit? I can get a confidence map with values [0, 1, 2] that indicate low, medium and high confidence respectively. Is this range compatible with the range of the confidence maps generated by OpenMVS? If I include the confidence maps in the .dmap files, will they be used for the reconstruction if I only fuse the depthmaps as they are (without refinement)?
Yes, they can be used, but you need to convert them to a [0,1] range, e.g. 0.5, 0.7, 0.9.
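As a concrete illustration of that remapping (the target values 0.5/0.7/0.9 are the ones suggested above; the function name and the assumption that the ARKit confidence map is available as an 8-bit OpenCV matrix are mine):

```cpp
// Minimal sketch: remap ARKit confidence levels {0,1,2} (low/medium/high) to
// float confidences in [0,1], e.g. {0.5, 0.7, 0.9} as suggested above.
#include <opencv2/core.hpp>

cv::Mat ConvertARKitConfidence(const cv::Mat& arkitConf /*CV_8U, values 0..2*/)
{
	static const float kLevels[3] = {0.5f, 0.7f, 0.9f};
	cv::Mat conf(arkitConf.size(), CV_32F);
	for (int r = 0; r < arkitConf.rows; ++r)
		for (int c = 0; c < arkitConf.cols; ++c)
			conf.at<float>(r, c) = kLevels[arkitConf.at<uchar>(r, c)];
	return conf;
}
```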
@adricostas Hi, you mentioned that you are using custom triangulation and bundle adjustment code to extract the sparse pointcloud from the data obtained through ARKit. I have been trying to do the same. Could you help me with this? Currently I am using poses from camera.transform, but when I try to visualize them, they are inaccurate. I would appreciate it if you could share how you computed/optimized the poses obtained from ARKit.
Hi @bkhanal-11, yes, I'm using the poses provided by camera.transform and they are not very accurate. In order to optimize them I'm applying a bundle adjustment after the triangulation, allowing the optimization of the 3D points, poses, and focal length.
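The poster's own triangulation/BA code isn't shared in the thread; as a hedged illustration of the described setup (refining 3D points, poses and a shared focal length after triangulation), a reprojection-error residual for Ceres Solver could look like the sketch below. The use of Ceres, the angle-axis pose parameterization and the fixed principal point are assumptions, not details from the post.

```cpp
// Hedged sketch only: a bundle-adjustment reprojection residual in Ceres Solver,
// optimizing camera pose (angle-axis + translation), a shared focal length and the
// 3D points, as described above.
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct ReprojectionError {
	ReprojectionError(double u, double v, double cx, double cy)
		: u(u), v(v), cx(cx), cy(cy) {}

	template <typename T>
	bool operator()(const T* const pose,   // [0..2] angle-axis rotation, [3..5] translation
	                const T* const focal,  // shared focal length (one parameter)
	                const T* const point,  // 3D point
	                T* residuals) const {
		T p[3];
		ceres::AngleAxisRotatePoint(pose, point, p); // rotate the point into the camera frame
		p[0] += pose[3]; p[1] += pose[4]; p[2] += pose[5];
		const T xn = p[0] / p[2];                    // normalized image coordinates
		const T yn = p[1] / p[2];
		residuals[0] = focal[0] * xn + T(cx) - T(u); // pixel reprojection error
		residuals[1] = focal[0] * yn + T(cy) - T(v);
		return true;
	}

	double u, v;   // observed pixel
	double cx, cy; // principal point, kept fixed in this sketch
};

// Usage: one residual block per observation, e.g.
//   ceres::Problem problem;
//   problem.AddResidualBlock(
//       new ceres::AutoDiffCostFunction<ReprojectionError, 2, 6, 1, 3>(
//           new ReprojectionError(u, v, cx, cy)),
//       new ceres::HuberLoss(1.0), pose, &focal, point);
//   // then ceres::Solve(options, &problem, &summary);
```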
Could you share the resources? I have been trying to do the same thing for a few weeks for my academic project and not getting any results. I would appreciate it if you could.
You can also use the Polycam app, which records the ARKit data (RGBD images); there is already an importer in OpenMVS for Polycam projects.
There is nothing wrong with the meshing; the problem you see is that the input point cloud does not accurately represent the scene. The monitors are larger in reality than the point cloud represents them, so their sides project onto the existing 3D data.
Hello again, In a previous comment you said that the confidence values must be in a [0,1] range. However, if I load the confidence maps saved into the .dmap files after executing the densification with depthmap estimation, I can see that the values are not in that range. For example: I think these non-normalized values come from the function FilterDepthMaps, in which confidence values are added together but not normalized again. Are these values ok?
Should this value be newConfMap(xRef) = (posConf - negConf) / (nPosViews + nNegViews + 1) instead of newConfMap(xRef) = (posConf - negConf)? Thanks in advance!
Hello,
I have a piece of software that creates a 3D reconstruction using images captured with a phone and their poses provided by ARKit. I'm using my own triangulation and bundle adjustment code to extract the sparse pointcloud, and then OpenMVS to densify it and create the textured mesh. The reconstruction is working perfectly, but now I'm trying to use the depthmaps provided by the LIDAR directly, in order to avoid the estimation step in the OpenMVS pipeline, and I'm not able to do it.
As far as I read in other reported issues, I have to create the .dmap files from the LIDAR's depthmaps. To do that, I can use MVS::HeaderDepthDataRaw and take the method MVS::ExportDepthDataRaw as an example. So, after creating my MVS::Scene and before calling DenseReconstruction I'm doing the following:
In this case the resolution of the images is 1280x960 and the resolution of the depthmaps is 256x192, but I'm indicating the latter as the resolution for both in the header because of this: "depth-map-resolution, for now only the same resolution as the image is supported". Is that ok? I guess this shouldn't be important, as the depthmaps are not estimated from the images in this case. On the other hand, sfmScene->frames[i].depthMap is a cv::Mat(192, 256, CV_32F) and therefore I can assign it directly to an MVS::DepthMap.
The execution after including this piece of code breaks, returning a segmentation fault. The segmentation fault occurs inside the MVS::PatchMatchCUDA::EstimateDepthMap function, and it seems to be due to the fact that NormalMap is empty.
Is it necessary to add information about the normals to the dmap file? I assumed it was not necessary because I'm not indicating that I have this information with HeaderDepthDataRaw::HAS_NORMAL. Another thing that should be pointed out is that, at the time that I'm creating the dmap files, the images don't have any neighbors defined. Could that be a problem? These neighbors are only used to estimate the depthmaps, right?
I would really appreciate your help on this. I don't know what I'm doing wrong.
Thank you in advance!
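The code block from the original post is not reproduced above; as a hypothetical sketch of the idea it describes (writing a LIDAR depth map to a .dmap file via MVS::ExportDepthDataRaw), it could look roughly like the following. The sfmScene/frames naming, the include paths, ComposeDepthFilePath and the exact argument list of ExportDepthDataRaw are assumptions and may differ between OpenMVS versions, so check libs/MVS/DepthMap.h of the version in use.

```cpp
// Hypothetical sketch (not the original poster's code): write one LIDAR depth map
// to a .dmap file by reusing MVS::ExportDepthDataRaw, which the post takes as a
// reference. Member names and the exact function signature are assumptions.
#include "MVS/Common.h"
#include "MVS/Scene.h"

void WriteLidarDepthMap(const MVS::Scene& scene, MVS::IIndex idxImage,
                        const cv::Mat& lidarDepth /*CV_32F, e.g. 256x192*/)
{
	const MVS::Image& imageData = scene.images[idxImage];

	// wrap the LIDAR depth values as an OpenMVS depth map (DepthMap wraps cv::Mat)
	const MVS::DepthMap depthMap(lidarDepth);

	// depth range taken from the data itself, ignoring invalid (zero) pixels
	double dMin, dMax;
	cv::minMaxLoc(lidarDepth, &dMin, &dMax, NULL, NULL, lidarDepth > 0);

	// image IDs stored in the header: the reference image first, neighbors omitted here
	MVS::IIndexArr IDs;
	IDs.Insert(idxImage);

	// no normals, confidences or per-pixel view masks are written in this sketch
	const MVS::NormalMap normalMap;
	const MVS::ConfidenceMap confMap;
	const MVS::ViewsMap viewsMap;

	MVS::ExportDepthDataRaw(MVS::ComposeDepthFilePath(imageData.ID, "dmap"),
		imageData.name, IDs, lidarDepth.size(),
		imageData.camera.K, imageData.camera.R, imageData.camera.C,
		(MVS::Depth)dMin, (MVS::Depth)dMax,
		depthMap, normalMap, confMap, viewsMap);
}
```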