add robust covariance matrix calculation to normal_3d.h #3169
Conversation
If points used for a covariance matrix calculation are far away from the origin, numerical issues can lead to unreliable results, as discussed in #2403. For normal estimation a fix was implemented that subtracts the center point of the neighborhood prior to the covariance calculation. Thus all values are close to zero and no numerical issues occur. This fix is based on the original function in centroid.h but was tailored to normal estimation, i.e. finite neighborhoods are expected; thus it is implemented in normal_3d.h.

Various experiments were conducted to check the functionality. Even if point clouds are shifted far away from the origin, this fix yields reliable normals, whereas the current implementation produces messy normals. In terms of run time the code is competitive, and it is even faster for non-dense clouds because the check for this property is omitted (not needed, since finite neighborhoods are a prerequisite in normal estimation).
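For readers of this thread, here is a minimal self-contained sketch of the idea; the function name and signature are illustrative, not the actual patch:

```cpp
#include <vector>
#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

// Illustrative sketch: accumulate moments relative to the first neighbor
// instead of the global origin, so large absolute coordinates do not cancel
// catastrophically in single-precision arithmetic.
inline unsigned int
demeanedCovarianceSketch (const pcl::PointCloud<pcl::PointXYZ> &cloud,
                          const std::vector<int> &indices,
                          Eigen::Matrix3f &covariance)
{
  if (indices.empty ())
    return (0);

  const pcl::PointXYZ &center = cloud[indices[0]];
  Eigen::Matrix<float, 1, 9> accu = Eigen::Matrix<float, 1, 9>::Zero ();

  for (const int index : indices)
  {
    // Shifted coordinates are close to zero for a compact neighborhood.
    const float x = cloud[index].x - center.x;
    const float y = cloud[index].y - center.y;
    const float z = cloud[index].z - center.z;
    accu[0] += x * x; accu[1] += x * y; accu[2] += x * z;
    accu[3] += y * y; accu[4] += y * z; accu[5] += z * z;
    accu[6] += x;     accu[7] += y;     accu[8] += z;
  }

  accu /= static_cast<float> (indices.size ());
  // cov = E[q q^T] - E[q] E[q]^T, with q = p - center; shifting by a
  // constant leaves the covariance itself unchanged.
  covariance << accu[0] - accu[6] * accu[6], accu[1] - accu[6] * accu[7], accu[2] - accu[6] * accu[8],
                accu[1] - accu[6] * accu[7], accu[3] - accu[7] * accu[7], accu[4] - accu[7] * accu[8],
                accu[2] - accu[6] * accu[8], accu[4] - accu[7] * accu[8], accu[5] - accu[8] * accu[8];
  return (static_cast<unsigned int> (indices.size ()));
}
```

Since the covariance is invariant to a constant shift, subtracting any nearby point gives the same matrix in exact arithmetic; the gain is purely numerical.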
Originally we considered integrating this fix into the "main" implementation of the covariance computation in centroid.h. Speaking of runtime, here are the results of some of the experiments that Dominik did. @SergioRAgostinho @msy22 Any thoughts on this?

@dominiknatter right now you simply added a new function. We will need to update the normal estimation code to actually use it. Also, a short unit test and a few more lines in the documentation will be needed, but more on that later.
The run time comparison looks good! I am curious as to why the proposed code is actually faster than the existing code for the Kinect and Clock Tower data sets. As for how to implement the fix, my thoughts are, in summary: make the proposed code the default for normal estimation.

@taketwo and @SergioRAgostinho, you both have far greater experience with PCL and large code bases than me, so feel free to correct me if any of my reasoning or assumptions are wrong.
Thank you for the introduction. I saw this PR but was unsure if this was from your colleague since it was only targeting normal estimation.
Currently you are creating a specialized replica of existing functionality, and it would obviously be great if we could manage to do everything in a unified way, for maintainability's sake. I need to have a better look to see if I can suggest something; however, given your previous brainstorm, I'm not very confident. The benchmarks look pretty. It would be good to have @msy22 actually run his error benchmarks to confirm this actually gets rid of the original problem. @msy22: I agree with all your assessments. As is, I'm a little sad to leave this new method available only for normal estimation.
Brief review. You might want to run your timing benchmarks on some of the changes I proposed. They are based on some expectations I have, but they might actually hinder performance without me being aware.
```cpp
 * \return size of the input cloud.
 * \ingroup common
 */
inline unsigned int
```
This is a big function. This `inline` is uncalled for.
```cpp
                Eigen::Matrix<float, 4, 1> &centroid)
{
  // create the buffer on the stack which is much faster than using cloud[indices[i]] and centroid as a buffer
  Eigen::Matrix<float, 1, 9, Eigen::RowMajor> accu = Eigen::Matrix<float, 1, 9, Eigen::RowMajor>::Zero ();
```
There's no need to explicitly specify row/col major in a vector. Also, you're accessing all elements individually in the code below. A `std::array` would be just fine.
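A sketch of that suggestion, keeping the same nine-element layout:

```cpp
#include <array>

// Plain stack buffer; the elements are accessed individually anyway,
// so no Eigen expression arithmetic is needed here.
std::array<float, 9> accu = {};  // value-initialized to all zeros
```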
This (and the previous `inline`) all comes directly from the original implementation. If we want to change it, then I'd change both.
Let's update both then.
We need a decision first whether we'll keep both :)
```cpp
  // create the buffer on the stack which is much faster than using cloud[indices[i]] and centroid as a buffer
  Eigen::Matrix<float, 1, 9, Eigen::RowMajor> accu = Eigen::Matrix<float, 1, 9, Eigen::RowMajor>::Zero ();
  size_t point_count;
  PointInT center = cloud[indices[0]];
```
Take a const reference to the point.
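I.e., roughly:

```cpp
// Binding a const reference avoids copying the point:
const PointInT &center = cloud[indices[0]];
```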
The case of indices being an empty vector (e.g. no neighbors) must be considered.
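A minimal version of that guard could be (assuming the function reports the number of points used, like the original in centroid.h):

```cpp
// No neighbors: bail out early and signal that zero points were used.
if (indices.empty ())
  return (0);
```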
Add that extra logic then. My original inline comment is still applicable.
```cpp
  Eigen::Matrix<float, 1, 9, Eigen::RowMajor> accu = Eigen::Matrix<float, 1, 9, Eigen::RowMajor>::Zero ();
  size_t point_count;
  PointInT center = cloud[indices[0]];
  PointInT p;
```
Under normal circumstances `p` should be declared inside the for-loop.
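E.g., a sketch consistent with the diff below (the loop form is assumed):

```cpp
for (const int index : indices)
{
  // p lives only inside the iteration that uses it.
  PointInT p;
  p.x = cloud[index].x - center.x;
  p.y = cloud[index].y - center.y;
  p.z = cloud[index].z - center.z;
  // ... accumulation as in the hunks below ...
}
```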
```cpp
      //const PointT& point = cloud[*iIt];
      p.x = cloud[index].x - center.x;
      p.y = cloud[index].y - center.y;
      p.z = cloud[index].z - center.z;
```
Any chance this can be vectorized with `getVector4fMap` (pcl/common/include/pcl/impl/point_types.hpp, line 192 in 861839f)?

```cpp
inline pcl::Vector4fMap getVector4fMap () { return (pcl::Vector4fMap (data)); }
```

E.g.:

```cpp
p.getVector4fMap () = cloud[index].getVector4fMap () - center.getVector4fMap ();
```
We experimented with this and also with the 3fMap version, and there was no difference in performance.
It's faster on Kinect because we eliminated the check for non-finite points.
We ran tests comparing the output of this fix with the "ideal fix" (where we subtract the actual centroid); they are the same down to fractions of a degree.
I also tend to agree. However, as I mentioned before, the boost in performance for Kinect clouds is actually unrelated to the fix and comes from adding an assumption that is valid for our normal estimation approach, but may not be true in the general case.
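Roughly, the general-purpose path has to guard every point, e.g. (a sketch; `cloud` and `index` as in the hunks above):

```cpp
#include <pcl/common/point_tests.h>

// In a general-purpose covariance routine the cloud may be non-dense,
// so every point needs a finiteness check before accumulation:
if (!pcl::isFinite (cloud[index]))
  continue;  // this guard is what the normal-estimation variant omits
```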
This pull request has been automatically marked as stale because it hasn't had any activity lately. Come back whenever you have time. We look forward to your contribution.
```cpp
        accu [1] += p.x * p.y;
        accu [2] += p.x * p.z;
        accu [3] += p.y * p.y;
        accu [4] += p.z * p.z;
```
Hi, I think there is a bug in this line. Shouldn't it rather be:

```cpp
accu [4] += p.y * p.z;
```

?
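For reference, a corrected accumulation in the usual upper-triangular order would read (assuming the same element layout as the original implementation in centroid.h):

```cpp
accu [0] += p.x * p.x;  // xx
accu [1] += p.x * p.y;  // xy
accu [2] += p.x * p.z;  // xz
accu [3] += p.y * p.y;  // yy
accu [4] += p.y * p.z;  // yz  (the diff above accumulates p.z * p.z here)
accu [5] += p.z * p.z;  // zz
```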
Hi, the Generalized ICP also has an embedded covariance computation; could it also benefit from this implementation? Regards
Marking this as stale due to 30 days of inactivity. Commenting or adding a new commit to the pull request will revert this.
The recent stale bot messages brought this up to my attention again, and being the author of this pull request I feel some responsibility. I am inexperienced with PCL, so I gotta ask: did I miss out on some task to be done from my end that caused this topic to turn stale in the first place? As far as I understood it, first a distinct decision is needed on whether to tailor this to normal estimation only or to replace the default implementation in centroid.h.
Both parties are at fault. While PCL failed to reach a decision on the placement of the improvement, you failed to keep the PR updated with your inputs and code updates. Neglect turned the PR stale. However, if you're interested in carrying this forward, I'd recommend taking a look at #1407; that way both concerns would be assuaged.

Though, I'd recommend starting a new PR and closing this one if you do decide to integrate. Otherwise this PR is alive again, and you can continue with your contribution here.
@kunaltyagi are there any current PRs that you recommend looking at to get these changes merged, or still #1407? I'm not super well versed with the internals of PCL, but I'd like to start getting involved with the project!
Superseded by pull request #4983