My camera is mounted on the robot, and hand-eye calibration gives me the extrinsic transformation between the robot and the camera.
Suppose I have a set of scans at different positions, scan_1–scan_N. For each scan I can directly read out the camera's pose P_i. The transformation between pose i and pose j is then H_i2j = P_j.inverse() * P_i.
It can also be estimated from scan_i and scan_j with the FPFH algorithm or other registration methods, i.e. H_est_i2j = fpfh(scan_i, scan_j).
Due to errors in the hand-eye calibration, the scans are not perfectly aligned when I transform them into the robot's base frame using H_i2j.
I would like to build a pose graph to optimize the poses. When constructing the pose graph, should the edges be set to H_i2j with uncertain set to False, or to H_est_i2j with uncertain set to True?
Thanks for your reply.
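To make the relative-transform relation concrete, here is a minimal numpy sketch (the pose values are made up for illustration; `pose` is a hypothetical helper, not part of any library):

```python
import numpy as np

def pose(theta, t):
    """Build a 4x4 homogeneous pose from a yaw angle and a translation (illustrative)."""
    T = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = t
    return T

# Two example camera poses as read from the robot (made-up values).
P_i = pose(0.1, [0.5, 0.0, 0.2])
P_j = pose(0.3, [1.0, 0.4, 0.2])

# Relative transform from pose i to pose j, as in the question.
H_i2j = np.linalg.inv(P_j) @ P_i

# Sanity check: composing P_j with H_i2j recovers P_i.
assert np.allclose(P_j @ H_i2j, P_i)
```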
The poses you trust the most should receive uncertain = False, because they will be used in the g2o optimization as a kind of backbone for the poses. When we have edges (i, j) that connect two non-adjacent stations, i.e. |j - i| > 1, it is assumed, as a simple heuristic, that these edges are less reliable than the others, because they were not obtained sequentially, so their scans will probably have less overlap. Sorry if that didn't help, but I didn't quite understand your geometric configuration. For proper optimization you need at least 3 poses: two of them odometric and one loop closure, which receives uncertain = True.
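A minimal sketch of that convention in plain Python (the `Edge` dataclass below only mirrors the relevant fields of a pose-graph edge; the identity transforms are placeholders for your H_i2j / H_est_i2j, and all values here are illustrative assumptions):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class Edge:
    """Minimal stand-in for a pose-graph edge (only the fields discussed above)."""
    source: int
    target: int
    transformation: np.ndarray  # 4x4 relative transform between the two poses
    uncertain: bool             # False = trusted odometric edge, True = loop closure

# Placeholder transforms; in practice these come from H_i2j (robot poses)
# or H_est_i2j (scan registration, e.g. FPFH).
I = np.eye(4)

edges = [
    Edge(0, 1, I, uncertain=False),  # sequential (odometric) edge: part of the backbone
    Edge(1, 2, I, uncertain=False),  # sequential (odometric) edge: part of the backbone
    Edge(0, 2, I, uncertain=True),   # non-adjacent stations (|j - i| > 1): loop closure
]

# The heuristic from the reply: only edges between non-adjacent stations are uncertain.
for e in edges:
    assert e.uncertain == (abs(e.target - e.source) > 1)
```

This is the minimal three-pose setup mentioned above: two odometric edges form the backbone, and the single loop-closure edge (marked uncertain) is what the optimizer is allowed to distrust and re-weight.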