
Problems trying to use depthmaps from a LIDAR sensor for DenseReconstruction #1055

Open
adricostas opened this issue Sep 8, 2023 · 18 comments


@adricostas

adricostas commented Sep 8, 2023

Hello,

I have software that creates a 3D reconstruction using images captured with a phone and their poses provided by ARKit. I'm using my own triangulation and bundle adjustment code to extract the sparse point cloud, and then OpenMVS to densify it and create the textured mesh. The reconstruction works perfectly, but now I'm trying to use the depthmaps provided by the LIDAR directly, in order to skip the estimation step in the OpenMVS pipeline, and I'm not able to do it.

From what I read in other reported issues, I have to create the .dmap files from the LIDAR's depthmaps. To do that, I can use MVS::HeaderDepthDataRaw and take the method MVS::ExportDepthDataRaw as an example. So, after creating my MVS::Scene and before calling DenseReconstruction, I'm doing the following:

for (int i = 0; i < mvsScene->images.size(); i++)
{
    String fileName = WORKING_FOLDER + String::FormatString("depth%04u.dmap", mvsScene->images[i].ID);
    FILE* f = fopen(fileName, "wb");
    if (f == NULL)
    {
        DEBUG("error: opening file '%s' for writing depth-data", fileName.c_str());
        continue; // skip this image instead of writing through a NULL handle
    }

    // write header
    MVS::HeaderDepthDataRaw header;
    header.name = MVS::HeaderDepthDataRaw::HeaderDepthDataRawName();
    header.type = MVS::HeaderDepthDataRaw::HAS_DEPTH;
    header.imageWidth = (uint32_t)sfmScene->frames[i].depthMap.cols;
    header.imageHeight = (uint32_t)sfmScene->frames[i].depthMap.rows;
    header.depthWidth = (uint32_t)sfmScene->frames[i].depthMap.cols;
    header.depthHeight = (uint32_t)sfmScene->frames[i].depthMap.rows;
    header.dMin = sfmScene->frames[i].minDepth;
    header.dMax = sfmScene->frames[i].maxDepth;
    fwrite(&header, sizeof(MVS::HeaderDepthDataRaw), 1, f);

    // write image file name
    const String FileName(mvsScene->images[i].name);
    const uint16_t nFileNameSize((uint16_t)FileName.length());
    fwrite(&nFileNameSize, sizeof(uint16_t), 1, f);
    fwrite(FileName.c_str(), sizeof(char), nFileNameSize, f);

    // write neighbor IDs (the reference view first)
    MVS::IIndexArr IDs(0, mvsScene->images[i].neighbors.size() + 1);
    IDs.push_back(mvsScene->images[i].ID);
    for (const MVS::ViewScore& neighbor : mvsScene->images[i].neighbors)
    {
        IDs.push_back(neighbor.idx.ID);
    }
    const uint32_t nIDs(IDs.size());
    fwrite(&nIDs, sizeof(MVS::IIndex), 1, f);
    fwrite(IDs.data(), sizeof(MVS::IIndex), nIDs, f);

    // write pose (camera and platform are fetched for image i elsewhere)
    fwrite(camera.K.val, sizeof(REAL), 9, f);
    fwrite(platform.poses[i].R.val, sizeof(REAL), 9, f);
    fwrite(platform.poses[i].C.ptr(), sizeof(REAL), 3, f);

    // write depth-map
    MVS::DepthMap depthMap(sfmScene->frames[i].depthMap);
    std::cout << depthMap.rows << std::endl;
    fwrite(depthMap.getData(), sizeof(float), depthMap.area(), f);

    const bool bRet(ferror(f) == 0);
    fclose(f);
}

In this case the resolution of the images is 1280x960 and the resolution of the depthmaps is 256x192, but I'm using the latter for both fields in the header because of this note: "depth-map-resolution, for now only the same resolution as the image is supported". Is that ok? I guess this shouldn't matter, since the depthmaps are not estimated from the images in this case. On the other hand, sfmScene->frames[i].depthMap is a cv::Mat(192,256,CV_32F), so I can assign it directly to a MVS::DepthMap.
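For reference, the byte layout written above can be sanity-checked in isolation. The sketch below is a hypothetical, self-contained mirror of the write order used in the loop (header, file name, view IDs, then raw depths; K/R/C are omitted for brevity); the authoritative struct is MVS::HeaderDepthDataRaw in OpenMVS's DepthMap.h, and the real magic/type values must come from there, otherwise the reader will mis-parse the file.

```cpp
#include <cassert>
#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

struct DmapHeaderSketch {              // hypothetical stand-in, NOT the real layout
    uint32_t magic      = 0x00444D44u; // placeholder magic, NOT OpenMVS's value
    uint32_t type       = 1;           // HAS_DEPTH-like flag (assumed value)
    uint32_t imageWidth = 0, imageHeight = 0;
    uint32_t depthWidth = 0, depthHeight = 0;
    float    dMin = 0.f, dMax = 0.f;
};

// Write in the same order as the snippet above: header, name, IDs, depths.
bool WriteDmapSketch(const std::string& path, const DmapHeaderSketch& h,
                     const std::string& imageName, const std::vector<uint32_t>& viewIDs,
                     const std::vector<float>& depths) {
    FILE* f = std::fopen(path.c_str(), "wb");
    if (!f) return false;
    std::fwrite(&h, sizeof(h), 1, f);
    const uint16_t nameLen = (uint16_t)imageName.size();
    std::fwrite(&nameLen, sizeof(nameLen), 1, f);
    std::fwrite(imageName.data(), 1, nameLen, f);
    const uint32_t nIDs = (uint32_t)viewIDs.size();
    std::fwrite(&nIDs, sizeof(nIDs), 1, f);
    std::fwrite(viewIDs.data(), sizeof(uint32_t), nIDs, f);
    // (real files also store K, R and C here; omitted in this sketch)
    std::fwrite(depths.data(), sizeof(float), depths.size(), f);
    const bool ok = std::ferror(f) == 0;
    std::fclose(f);
    return ok;
}

// Read the same fields back; depth count comes from the header.
bool ReadDmapSketch(const std::string& path, DmapHeaderSketch& h,
                    std::string& imageName, std::vector<uint32_t>& viewIDs,
                    std::vector<float>& depths) {
    FILE* f = std::fopen(path.c_str(), "rb");
    if (!f) return false;
    bool ok = std::fread(&h, sizeof(h), 1, f) == 1;
    uint16_t nameLen = 0;
    ok = ok && std::fread(&nameLen, sizeof(nameLen), 1, f) == 1;
    imageName.resize(nameLen);
    ok = ok && std::fread(&imageName[0], 1, nameLen, f) == nameLen;
    uint32_t nIDs = 0;
    ok = ok && std::fread(&nIDs, sizeof(nIDs), 1, f) == 1;
    viewIDs.resize(nIDs);
    ok = ok && std::fread(viewIDs.data(), sizeof(uint32_t), nIDs, f) == nIDs;
    depths.resize(size_t(h.depthWidth) * h.depthHeight);
    ok = ok && std::fread(depths.data(), sizeof(float), depths.size(), f) == depths.size();
    std::fclose(f);
    return ok;
}
```

A round-trip through these two functions confirms that the field order and sizes agree between writer and reader, which is exactly the property the real code must have against OpenMVS's importer.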

After including this piece of code, the execution breaks with a segmentation fault. It occurs inside the MVS::PatchMatchCUDA::EstimateDepthMap function, and it seems to be due to the NormalMap being empty.

10:01:33 [App     ] Preparing images for dense reconstruction completed: 12 images (62ms)
10:01:33 [App     ] Reference image   0 sees 5 views:  10(159pts,1.31scl)  11(162pts,1.33scl)   1(85pts,1.07scl)   2(29pts,1.11scl)   9(25pts,1.20scl) (187 shared points)
10:01:33 [App     ] Reference image   3 sees 8 views:   8(159pts,1.18scl)   4(149pts,1.02scl)   2(125pts,0.99scl)   9(107pts,1.10scl)   7(125pts,1.31scl)   5(71pts,1.06scl)  10(42pts,1.14scl)   1(8pts,0.96scl) (288 shared points)
10:01:33 [App     ] Reference image   4 sees 7 views:   7(250pts,1.27scl)   5(188pts,1.02scl)   3(149pts,0.98scl)   8(169pts,1.13scl)   9(70pts,1.07scl)   6(35pts,1.11scl)   2(50pts,0.97scl) (355 shared points)
10:01:33 [App     ] Reference image   8 sees 9 views:   9(312pts,0.91scl)   7(197pts,1.12scl)   3(159pts,0.85scl)   4(169pts,0.89scl)   2(235pts,0.83scl)  10(136pts,0.94scl)   5(108pts,0.94scl)   6(22pts,1.01scl)   1(9pts,0.76scl) (544 shared points)
10:01:33 [App     ] Reference image   6 sees 4 views:   5(50pts,0.94scl)   7(48pts,1.10scl)   4(35pts,0.90scl)   8(22pts,0.99scl) (63 shared points)
10:01:33 [App     ] Reference image   5 sees 5 views:   4(188pts,0.98scl)   7(177pts,1.22scl)   8(108pts,1.07scl)   6(50pts,1.07scl)   3(71pts,0.95scl) (239 shared points)
10:01:33 [App     ] Reference image   1 sees 7 views:  10(243pts,1.23scl)  11(195pts,1.22scl)   2(135pts,1.03scl)   0(85pts,0.94scl)   9(94pts,1.14scl)   3(8pts,1.05scl)   8(9pts,1.31scl) (291 shared points)
10:01:33 [App     ] Reference image  10 sees 8 views:   9(283pts,0.95scl)   2(251pts,0.86scl)  11(299pts,1.00scl)   1(243pts,0.82scl)   0(159pts,0.77scl)   8(136pts,1.06scl)   3(42pts,0.88scl)   7(3pts,1.12scl) (580 shared points)
10:01:33 [App     ] Reference image   9 sees 9 views:   2(340pts,0.91scl)   8(312pts,1.10scl)  10(283pts,1.05scl)   3(107pts,0.91scl)   1(94pts,0.88scl)  11(90pts,1.08scl)   7(89pts,1.21scl)   4(70pts,0.94scl)   0(25pts,0.83scl) (553 shared points)
10:01:33 [App     ] Reference image   7 sees 8 views:   4(250pts,0.79scl)   8(197pts,0.89scl)   5(177pts,0.82scl)   3(125pts,0.76scl)   9(89pts,0.83scl)   6(48pts,0.91scl)   2(37pts,0.73scl)  10(3pts,0.89scl) (362 shared points)
10:01:33 [App     ] Reference image   2 sees 9 views:   9(340pts,1.10scl)  10(251pts,1.16scl)   8(235pts,1.21scl)   3(125pts,1.01scl)   1(135pts,0.98scl)  11(105pts,1.19scl)   4(50pts,1.03scl)   0(29pts,0.90scl)   7(37pts,1.38scl) (508 shared points)
10:01:33 [App     ] Selecting images for dense reconstruction completed: 12 images (0ms)
[New Thread 0x7fff617fe000 (LWP 80601)]
[Thread 0x7fff617fe000 (LWP 80601) exited]
[New Thread 0x7fff60ffd000 (LWP 80602)]
[New Thread 0x7fff60ffd000 (LWP 80603)]
[Thread 0x7fff60ffd000 (LWP 80602) exited]
[New Thread 0x7fff617fe000 (LWP 80604)]

Thread 43 "-spatial-" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff60ffd000 (LWP 80603)]
MVS::PatchMatchCUDA::EstimateDepthMap (this=<optimized out>, depthData=...)
    at /data/libraries/openMVS-2.1.0/libs/MVS/PatchMatchCUDA.cpp:341
341					depthNormal.topLeftCorner<3, 1>() = Eigen::Map<const Normal::EVec>(n.ptr());
(gdb) bt
#0  MVS::PatchMatchCUDA::EstimateDepthMap (this=<optimized out>, depthData=...)
    at /data/libraries/openMVS-2.1.0/libs/MVS/PatchMatchCUDA.cpp:341
#1  0x0000555555bc9fac in MVS::DepthMapsData::EstimateDepthMap (this=0x7fffffffc578,
    idxImage=<optimized out>, nGeometricIter=0)
    at /data/libraries/openMVS-2.1.0/libs/Common/List.h:370
#2  0x0000555555bce2e0 in MVS::Scene::DenseReconstructionEstimate (this=<optimized out>,
    pData=0x7fffffffc550) at /data/libraries/openMVS-2.1.0/libs/Common/List.h:370
#3  0x0000555555bcf8df in DenseReconstructionEstimateTmp (arg=<optimized out>)
    at /data/libraries/openMVS-2.1.0/libs/MVS/SceneDensify.cpp:1932
#4  0x00007ffff7eb4ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007fffd4346a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

Is it necessary to add information about the normals to the dmap file? I supposed it was not, because I'm not indicating that I have this information with HeaderDepthDataRaw::HAS_NORMAL. Another thing worth pointing out: at the time I create the dmap files, the images do not have any neighbors defined yet. Could that be a problem? These neighbors are only used to estimate the depthmaps, right?

I would really appreciate your help on this. I don't know what I'm doing wrong.

Thank you in advance!

@cdcseacave
Owner

@adricostas the code is fine; all you need to do is match the resolution of the depth-maps when you call DenseReconstruction, using the param: --max-resolution 256
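For what it's worth, with the numbers in this issue the suggestion works out exactly: capping the larger side of 1280x960 at 256 gives a uniform scale of 256/1280 = 0.2, i.e. 256x192, which is precisely the LIDAR depth-map resolution. A tiny self-contained illustration (hypothetical helper, not the actual OpenMVS scaling code, which may round or snap differently):

```cpp
#include <algorithm>
#include <utility>

// Uniformly scale (w, h) so the larger side equals maxRes.
// Mimics the intent of --max-resolution; illustration only.
std::pair<int, int> CapResolution(int w, int h, int maxRes) {
    const double s = double(maxRes) / double(std::max(w, h));
    return { int(w * s + 0.5), int(h * s + 0.5) };
}
```

So after the cap, the image grid and the LIDAR depth-map grid coincide pixel-for-pixel.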

@adricostas
Author

Thanks for the reply. Unfortunately, even after changing the max_resolution parameter, the densification doesn't work. Here are the values that I'm using:

        MVS::OPTDENSE::init();
        MVS::OPTDENSE::update();
        MVS::OPTDENSE::nResolutionLevel = 0;          // how many times to scale down the images before point cloud computation
        MVS::OPTDENSE::nMaxResolution = 256;         // do not scale images higher than this resolution
        MVS::OPTDENSE::nMinResolution = 192;          // do not scale images lower than this resolution
        MVS::OPTDENSE::nSubResolutionLevels = 2;      // number of patch-match sub-resolution iterations (0 - disabled)
        MVS::OPTDENSE::nNumViews = 8;                 // number of views used for depth-map estimation (0 - all neighbor views available)
        MVS::OPTDENSE::nMinViewsFuse = 3;             // minimum number of images that agrees with an estimate during fusion in order to consider it inlier (<2 - only merge depth-maps)
        MVS::OPTDENSE::nEstimationIters = 4;          // number of patch-match iterations
        MVS::OPTDENSE::nEstimationGeometricIters = 2; // number of geometric consistent patch-match iterations (0 - disabled)
        MVS::OPTDENSE::nEstimateColors = 2;           // estimate the colors for the dense point-cloud (0 - disabled, 1 - final, 2 - estimate)
        MVS::OPTDENSE::nEstimateNormals = 2;          // estimate the normals for the dense point-cloud (0 - disabled, 1 - final, 2 - estimate)
        MVS::OPTDENSE::nOptimize = 7;                 // flags used to filter the depth-maps after estimation (0 - disabled, 1 - remove-speckles, 2 - fill-gaps, 4 - adjust-filter)
        MVS::OPTDENSE::nIgnoreMaskLabel = -1;         // label value to ignore in the image mask, stored in the MVS scene or next to each image with '.mask.png' extension (<0 - disabled)
        MVS::OPTDENSE::bRemoveDmaps = false;          // remove depth-maps after fusion
        Util::Init();

        bool densificationSuccess = mvsScene->DenseReconstruction();

Any idea?

@cdcseacave
Owner

I could have ideas if I knew what is not working: an error message, assert, line, etc.

@adricostas
Author

adricostas commented Sep 18, 2023

The same as at the initial stage:

 received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fff60ffd000 (LWP 80603)]
MVS::PatchMatchCUDA::EstimateDepthMap (this=<optimized out>, depthData=...)
    at /data/libraries/openMVS-2.1.0/libs/MVS/PatchMatchCUDA.cpp:341
341					depthNormal.topLeftCorner<3, 1>() = Eigen::Map<const Normal::EVec>(n.ptr());
(gdb) bt
#0  MVS::PatchMatchCUDA::EstimateDepthMap (this=<optimized out>, depthData=...)
    at /data/libraries/openMVS-2.1.0/libs/MVS/PatchMatchCUDA.cpp:341
#1  0x0000555555bc9fac in MVS::DepthMapsData::EstimateDepthMap (this=0x7fffffffc578,
    idxImage=<optimized out>, nGeometricIter=0)
    at /data/libraries/openMVS-2.1.0/libs/Common/List.h:370
#2  0x0000555555bce2e0 in MVS::Scene::DenseReconstructionEstimate (this=<optimized out>,
    pData=0x7fffffffc550) at /data/libraries/openMVS-2.1.0/libs/Common/List.h:370
#3  0x0000555555bcf8df in DenseReconstructionEstimateTmp (arg=<optimized out>)
    at /data/libraries/openMVS-2.1.0/libs/MVS/SceneDensify.cpp:1932
#4  0x00007ffff7eb4ea7 in start_thread () from /lib/x86_64-linux-gnu/libpthread.so.0
#5  0x00007fffd4346a2f in clone () from /lib/x86_64-linux-gnu/libc.so.6

And this is because depthData.normalMap is empty when this code is executed:

// load depth-map and normal-map into CUDA memory
for (int r = 0; r < depthData.depthMap.rows; ++r) {
    const int baseIndex = r * depthData.depthMap.cols;
    for (int c = 0; c < depthData.depthMap.cols; ++c) {
        const Normal& n = depthData.normalMap(r, c);
        const int index = baseIndex + c;
        Point4& depthNormal = depthNormalEstimates[index];
        depthNormal.topLeftCorner<3, 1>() = Eigen::Map<const Normal::EVec>(n.ptr());
        depthNormal.w() = depthData.depthMap(r, c);
    }
}

@cdcseacave
Owner

first of all why do you even run EstimateDepthMap if you have the depth-maps? do you want to refine them?

@cdcseacave
Owner

to fuse them as they are add these params --geometric-iters 0 --postprocess-dmaps 0

@adricostas
Author

first of all why do you even run EstimateDepthMap if you have the depth-maps? do you want to refine them?

No, I don't want to refine them; I only want to take advantage of the depthmaps provided by a LIDAR to carry out the reconstruction. In other issues I read that I should convert my depthmaps to .dmap files, and that's all I have done so far. I don't know whether I should change any of the parameters that I pass to the DenseReconstruction function compared to the ones I use when I don't have those depthmaps (listed in one of my previous comments). So far I have only changed the values of max_resolution and min_resolution based on your previous comment.

to fuse them as they are add these params --geometric-iters 0 --postprocess-dmaps 0

Ok, I will try this!

Thanks

@adricostas
Author

Now it is working! Thank you very much for your support!
Just out of curiosity, since the reconstruction that I'm getting using the LIDAR depthmaps is a little bit worse than the one returned when the depthmaps are estimated, is there a way of refining the LIDAR depthmaps as you mentioned in your previous comment?

@cdcseacave
Owner

cdcseacave commented Sep 18, 2023 via email

@adricostas
Author

adricostas commented Sep 19, 2023

Ok, thanks, your help is really appreciated. And what about using confidence maps together with the depthmaps provided by ARKit? I can get a confidence map with values [0,1,2] that indicate low, medium and high confidence respectively. Is this range compatible with the range of the confidence maps generated by OpenMVS? And if I include the confidence maps in the .dmap files, will they be used for the reconstruction when I only fuse the depthmaps as they are (without refinement)?

@cdcseacave
Owner

yes, they can be used, but you need to convert them to a [0,1] range, e.g. 0.5, 0.7, 0.9
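A minimal sketch of such a conversion, assuming ARKit's three confidence levels and the example values from this comment (the helper name and the exact values 0.5/0.7/0.9 are illustrative; any monotonic mapping into (0,1] should behave similarly):

```cpp
// Map ARKit confidence levels (0 = low, 1 = medium, 2 = high)
// into the (0,1] range expected by OpenMVS confidence maps.
float ArkitConfidenceTo01(int level) {
    switch (level) {
    case 0:  return 0.5f;
    case 1:  return 0.7f;
    case 2:  return 0.9f;
    default: return 0.f; // invalid level -> zero confidence (pixel ignored)
    }
}
```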

@bkhanal-11

@adricostas Hi, you mentioned that you are using custom triangulation and bundle adjustment code to extract the sparse point cloud from the data obtained through ARKit. I have been trying to do the same. Could you help me with this? Currently I am using poses from camera.transform, but when I try to visualize them, they are inaccurate. I would appreciate it if you could share how you computed/optimized the poses obtained from ARKit.

@adricostas
Author

Hi @bkhanal-11 ,

Yes, I'm using the poses provided by camera.transform and they are not very accurate. In order to optimize them I'm applying a bundle adjustment after the triangulation, allowing the optimization of the 3D points, poses and focal length.

@bkhanal-11

Hi @bkhanal-11 ,

Yes, I'm using the poses provided by camera.transform and they are not very accurate. In order to optimize them I'm applying a bundle adjustment after the triangulation, allowing the optimization of the 3D points, poses and focal length.

Could you share the resources? I have been trying to do the same thing for a few weeks for my academic project without getting any results. I would appreciate it if you could do that.

@cdcseacave
Owner

You can also use the Polycam app, which records the ARKit data (RGBD images); there is already an importer for Polycam projects in OpenMVS.

@adricostas
Author

adricostas commented Dec 4, 2023

Hello,

Now I'm able to use my own depthmaps and confidence maps. The point cloud seems to be good, but the mesh is not: the edges of the screens are projected onto the wall. At first I thought it could be related to the fact that the depth values are not good for black objects, so I set the confidence value to 0 for those black pixels, but the issue persists.
[four screenshots: the dense point cloud and the textured mesh, showing the monitor edges projected onto the wall]

Is there any parameter of the ReconstructMesh/RefineMesh/TextureMesh functions that could be related to this issue?

@cdcseacave
Owner

there is nothing wrong with the meshing; the problem you see is that the input point cloud does not accurately represent the scene. The monitors are larger in reality than the point cloud represents them, so their sides get projected onto the existing 3D data.

@adricostas
Author

adricostas commented Dec 15, 2023

Hello again,

In a previous comment you said that the confidence values must be in a [0,1] range. However, if I load the confidence maps saved into the .dmap files after running the densification with depthmap estimation, I can see that the values are not in that range. For example:
1, 5.0483069, 5.0695906, 2.0225251, 3.4381022, 3.4898958, 3.5330601, 3.4796228, 5.2911892, 5.4303646, 5.5793405, 5.5250182, 5.5812407, 6.0368762, 5.9885068, 6.1395311, 6.0218611, 7.9737659, 7.0617104, 6.3745222, 6.0646019, 6.9580593, 7.3866162, 7.9089084, 7.2736816, 7.8875203, 7.7201228, 7.8146148, 7.994205, 8.2426958, 8.1242065, 7.8560362, 7.6062765, 7.7271338, 7.6856451, 7.5575194, 7.7892895, 7.8579407, 7.6881709, 7.989953, 8.1339159, 8.224411, 8.2972584, 8.2843409, 8.257967, 8.1948452, 8.0655117, 8.1105509, 8.1168203, 7.987555, 7.4744143, 7.3470759, 7.4429359, 7.3608084, 7.524375, 7.4632635, 7.3884854, 7.255477, 7.5569077, 7.2498341, 7.6209731, 7.71593, 7.8580632, 7.9617243, 7.8899817, 8.2001476, 8.2355967, 8.3117199, 8.2758617, 8.2630291, 8.3665562, 8.2412949, 8.3400803, 8.1813316, 8.2487545, 8.2409811, 8.2706308, 8.244483, 8.1666231, 8.1991138, 8.0261993, 7.3620954, 7.1547971, 7.4995127, 7.5973654, 7.3446236, 7.4550943, 7.1218815, 7.4026203, 7.1257133, 7.2600708, 6.9128304, 6.6415625, 6.7303729, 7.2444487, 7.3918858, 7.558774, 7.2504511, 4.3551507, 0, 4.8897295, 6.5880327, 0, 3.3730402, 0, 0, 0, 0, 0, 0, 4.0555401, 4.1620922, 4.2206154, 1.6849673, 2.3533087, 0.78184462, 3.8277426, 5.5829439, 7.4815526, 7.5697012, 7.30651, 6.6364918, 6.8388777, 6.4330626, 1.8075125, 0, 2.1535058, 4.5627813, 5.3484144, 0.59444714, 5.9300184, 6.3187647, 3.9350061, 3.2421331, 5.7487864, 1.1701021, 6.9918909, 4.3732824, 5.2069545, 0, 0.38182735, 0, 0, 0, 0.87245917, 0, 2.379704, 4.2571478, 7.6528358, 6.2679367, 6.3264341, 7.4724069, 7.4385476, 7.375258, 7.5066323, 7.2340479, 7.2679281, 5.8839655, 5.7965603, 3.8879151, 3.1355267, 3.2605906, 5.933054, 6.1761336, 6.0362225, 6.0593672, 6.365809, 3.3957384, 4.1885295, 0.68125665, 5.500412, 5.6755672, 5.2084894, 4.2836995, 4.5210762, 5.9079604, 6.2796335, 6.8923545, 7.0521898, 6.7968831, 7.0922742, 6.7861562, 6.5700784, 6.0940976, 6.4174914, 6.5802846, 6.0844502, 6.2528706, 6.2330751, 5.9067507, 4.849577, 6.2422915, 5.6693168, 5.8975906, 
6.0085521, 6.3508468, 5.5860386, 5.9122629, 6.1860838, 6.1318274, 6.150589, 6.239943, 5.9974422, 5.9547424, 6.0962205, 6.1653175, 6.5562329, 6.6156454, 5.1655059, 5.0708065, 6.5970111, 6.8149738, 3.6255834, 5.7750845, 5.7844472, 5.917984, 5.5443387, 3.9540286, 2.0247207, 4.19065, 0, 4.3368883, 3.7535872, 3.035743, 1.2702851, 3.3791163, 1.3348475, 2.4232755, 0.38400102, 0.07514596, 1.025833, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1.7016785, 1.7002449, 0.054858685, 0, 0, 0, 3.1563268, 3.3699477, 1.8394439, 0.42570138, 0, 0;

I think these non-normalized values come from the function FilterDepthMaps, in which confidence values are accumulated but not normalized afterwards. Are these values ok?

for (int i = 0; i < sizeRef.height; ++i) {
    for (int j = 0; j < sizeRef.width; ++j) {
        const ImageRef xRef(j,i);
        const Depth depth(depthDataRef.depthMap(xRef));
        if (depth == 0) {
            newDepthMap(xRef) = 0;
            newConfMap(xRef) = 0;
            continue;
        }
        ASSERT(depth > 0);
        #if TD_VERBOSE != TD_VERBOSE_OFF
        ++nProcessed;
        #endif
        // update best depth and confidence estimate with all estimates
        float posConf(depthDataRef.confMap(xRef)), negConf(0);
        Depth avgDepth(depth*posConf);
        unsigned nPosViews(0), nNegViews(0);
        unsigned n(N);
        do {
            const Depth d(depthMaps[--n](xRef));
            if (d == 0) {
                if (nPosViews + nNegViews + n < nMinViews)
                    goto DiscardDepth;
                continue;
            }
            ASSERT(d > 0);
            if (IsDepthSimilar(depth, d, thDepthDiff)) {
                // average similar depths
                const float c(confMaps[n](xRef));
                avgDepth += d*c;
                posConf += c;
                ++nPosViews;
            } else {
                // penalize confidence
                if (depth > d) {
                    // occlusion
                    negConf += confMaps[n](xRef);
                } else {
                    // free-space violation
                    const DepthData& depthData = arrDepthData[depthDataRef.neighbors[idxNeighbors[n]].idx.ID];
                    const Camera& camera = depthData.images.First().camera;
                    const Point3 X(cameraRef.TransformPointI2W(Point3(xRef.x,xRef.y,depth)));
                    const ImageRef x(ROUND2INT(camera.TransformPointW2I(X)));
                    if (depthData.confMap.isInside(x)) {
                        const float c(depthData.confMap(x));
                        negConf += (c > 0 ? c : confMaps[n](xRef));
                    } else
                        negConf += confMaps[n](xRef);
                }
                ++nNegViews;
            }
        } while (n);
        ASSERT(nPosViews+nNegViews >= nMinViews);
        // if enough good views and positive confidence...
        if (nPosViews >= nMinViewsAdjust && posConf > negConf && ISINSIDE(avgDepth/=posConf, depthDataRef.dMin, depthDataRef.dMax)) {
            // consider this pixel an inlier
            newDepthMap(xRef) = avgDepth;
            newConfMap(xRef) = (posConf - negConf);
        } else {
            // consider this pixel an outlier
            DiscardDepth:
            newDepthMap(xRef) = 0;
            newConfMap(xRef) = 0;
            #if TD_VERBOSE != TD_VERBOSE_OFF
            ++nDiscarded;
            #endif
        }
    }
}

Should this value be newConfMap(xRef) = (posConf - negConf)/(nPosViews + nNegViews + 1) instead of newConfMap(xRef) = (posConf - negConf)?
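The proposed normalization, written as a standalone sketch (hypothetical helper, not OpenMVS code): divide the accumulated signed confidence by the number of views that voted, plus one for the reference view, so the fused value stays on a per-view scale.

```cpp
// Normalize the fused confidence from the loop above:
// (posConf - negConf) averaged over the contributing views (+1 for the reference).
float NormalizedConfidence(float posConf, float negConf,
                           unsigned nPosViews, unsigned nNegViews) {
    return (posConf - negConf) / float(nPosViews + nNegViews + 1);
}
```

For example, three agreeing views with confidences summing (with the reference) to 2.4 and an occlusion penalty of 0.4 would yield (2.4 - 0.4) / 4 = 0.5 instead of the raw 2.0.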

Thanks in advance!
