Fix cal3 fisheye jacobian #902
@@ -17,6 +17,15 @@
from gtsam.utils.test_case import GtsamTestCase
from gtsam.symbol_shorthand import K, L, P


def ulp(ftype=np.float64):
    """
    Unit in the last place of floating point datatypes
    """
    f = np.finfo(ftype)
    return f.tiny / ftype(1 << f.nmant)
Review comment: This is awesome, but is it physically plausible? I don't expect the radial value to be 1e-154 given today's manufacturing precision. Using a reasonable value of, say, 1e-20 should be equivalent?

Reply: The goal here is to use the ulp value to test for a possible overflow in the implementation, independent of the floating point datatype. The value of ulp() for the common float types:

julia> eps(nextfloat(Float16(0.0)))
Float16(6.0e-8)

julia> eps(nextfloat(Float32(0.0)))
1.0f-45

julia> eps(nextfloat(Float64(0.0)))
5.0e-324

The f.tiny attribute is the smallest value at which a floating point number can still be represented in normalized form; dividing it by 2^nmant gives the smallest denormalized value ulp, such that 0 + ulp > 0. In simple terms, when numbers are denormalized, calculations are done in fixed-point arithmetic, using the smallest exponent for the mantissa. This value is used in the test to check whether overflow occurs: in the original implementation, setting x = ulp() and y = 0 resulted in an overflow, returning a NaN Jacobian. In terms of the numeric resolution of a double float, a value of 1e-20 is a huge number; for half floats, it is too small to represent. The test would then not catch any critical overflow condition (the same holds for using f.tiny, which does not trigger an overflow when computing the Jacobian). Physically, in terms of a Taylor expansion, a radius of 1e-20 would certainly lie in the paraxial region; as noted, whether the implementation handles it depends on the datatype.

Reply: Awesome, this is a great explanation.
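For reference, the same value can be cross-checked in numpy itself. This is a standalone sketch (not part of the patch) showing that the helper returns the first representable number above zero for each float type, matching the julia outputs quoted above:

```python
import numpy as np

def ulp(ftype=np.float64):
    """Unit in the last place at zero: the smallest subnormal of ftype."""
    f = np.finfo(ftype)
    return f.tiny / ftype(1 << f.nmant)

# Cross-check: the value equals the first representable step above 0.
for ftype in (np.float16, np.float32, np.float64):
    assert ulp(ftype) == np.nextafter(ftype(0), ftype(1))
```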
class TestCal3Fisheye(GtsamTestCase):

    @classmethod
@@ -105,6 +114,71 @@ def test_sfm_factor2(self):
        score = graph.error(state)
        self.assertAlmostEqual(score, 0)

    def test_jacobian_on_axis(self):
        """Check jacobian at optical axis"""
        obj_point_on_axis = np.array([0, 0, 1])
        img_point = np.array([0, 0])
        f, z, H = self.evaluate_jacobian(obj_point_on_axis, img_point)
        self.assertAlmostEqual(f, 0)
        self.gtsamAssertEquals(z, np.zeros(2))
        self.gtsamAssertEquals(H @ H.T, 3*np.eye(2))

    def test_jacobian_convergence(self):
        """Test stability of jacobian close to optical axis"""
        t = ulp(np.float64)
        obj_point_close_to_axis = np.array([t, 0, 1])
        img_point = np.array([np.sqrt(t), 0])
        f, z, H = self.evaluate_jacobian(obj_point_close_to_axis, img_point)
        self.assertAlmostEqual(f, 0)
        self.gtsamAssertEquals(z, np.zeros(2))
        self.gtsamAssertEquals(H @ H.T, 3*np.eye(2))

        # With a height of sqrt(ulp), this may cause an overflow
        t = ulp(np.float64)
        obj_point_close_to_axis = np.array([np.sqrt(t), 0, 1])
        img_point = np.array([np.sqrt(t), 0])
        f, z, H = self.evaluate_jacobian(obj_point_close_to_axis, img_point)
        self.assertAlmostEqual(f, 0)
        self.gtsamAssertEquals(z, np.zeros(2))
        self.gtsamAssertEquals(H @ H.T, 3*np.eye(2))

    def test_scaling_factor(self):
        """Check convergence of atan2(r, z)/r ~ 1/z for small r"""
        r = ulp(np.float64)
        s = np.arctan(r) / r
        self.assertEqual(s, 1.0)
        z = 1
        s = self.scaling_factor(r, z)
        self.assertEqual(s, 1.0/z)
        z = 2
        s = self.scaling_factor(r, z)
        self.assertEqual(s, 1.0/z)
        s = self.scaling_factor(2*r, z)
        self.assertEqual(s, 1.0/z)

    @staticmethod
    def scaling_factor(r, z):
        """Projection factor theta/r for equidistant fisheye lens model"""
        return np.arctan2(r, z) / r if r/z != 0 else 1.0/z

    @staticmethod
    def evaluate_jacobian(obj_point, img_point):
        """Evaluate jacobian at given object point"""
        pose = gtsam.Pose3()
        camera = gtsam.Cal3Fisheye()
        state = gtsam.Values()
        camera_key, pose_key, landmark_key = K(0), P(0), L(0)
        state.insert_point3(landmark_key, obj_point)
        state.insert_pose3(pose_key, pose)
        g = gtsam.NonlinearFactorGraph()
        noise_model = gtsam.noiseModel.Unit.Create(2)
        factor = gtsam.GenericProjectionFactorCal3Fisheye(img_point, noise_model, pose_key, landmark_key, camera)
        g.add(factor)
        f = g.error(state)
        gaussian_factor_graph = g.linearize(state)
        H, z = gaussian_factor_graph.jacobian()
        return f, z, H

    @unittest.skip("triangulatePoint3 currently seems to require perspective projections.")
    def test_triangulation_skipped(self):
        """Estimate spatial point from image measurements"""
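The overflow condition exercised by test_jacobian_convergence can be reproduced outside of GTSAM. The following is an illustrative numpy sketch (not part of the patch) of why squaring a radius near sqrt(denorm_min) breaks a naive 1/r2 formulation, while the atan2-based scaling stays finite:

```python
import numpy as np

denorm_min = np.nextafter(0.0, 1.0)   # smallest subnormal double, ~4.9e-324
r = np.sqrt(denorm_min)               # image radius ~1.5e-162, as in the test
r2 = r * r                            # lands back in the subnormal range

with np.errstate(over='ignore', divide='ignore'):
    rrdiv = 1.0 / r2                  # naive intermediate: overflows to inf
assert np.isinf(rrdiv)

# A scaling that never forms 1/r2 remains well-behaved here:
s = np.arctan2(r, 1.0) / r            # theta/r for z = 1
assert np.isfinite(s)
```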
Review comment: Maybe this should be r2 < 1e-7 or some other tolerance.

Reply: The condition r2 > 0 implies that the resolution of the image radius r is above sqrt(std::numeric_limits<double>::denorm_min()) ~ 1.49e-154. By avoiding the squaring or division of two small values, the resulting gradient should be reasonably accurate. I implemented a Python test case, test_jacobian_convergence in test_Cal3Fisheye.py, to cover this situation. In the current implementation of the Jacobian, an overflow or division-by-zero error results if the squared radius r2 is of the size of 1 ulp (unit of least precision; in C++, std::numeric_limits<double>::denorm_min() ~ 4.94e-324). I suggest the refactoring in the provided patch below, which avoids the offending 1/r2 division (or multiplication with rrdiv). Eventually, the interface design might be extended to support incident angles above 90 deg. It may then be beneficial to generalize the formulation of the derivatives to cover the case zi != 1. The scaling function needs to be generalized to use atan2(r, z)/r instead of atan(r)/r; from numeric tests, it seems fine to implement the scaling for r/z != 0. See the test_scaling_factor unit test covering this case.

Reply: @roderick-koehle I believe @ProfFan was referring to the fact that an r2 == 0 check is numerically unstable, since the check will fail for r2 = 1e-15, yet a value of r2 that small can still cause issues like overflow and bad Jacobians. This is a recommendation for a numerical check rather than an algebraic check.
Reply: Sorry for the late reply. I would not recommend an arbitrary fixed threshold for selecting the paraxial approximation. The critical, numerically unstable step is the computation of the unit directions,

c, s = x/r, y/r

This problem is related to computing the Givens rotation, e.g. as used in the BLAS library. Once the unit direction is known, the sine and cosine are normalized numbers, and no overflow/underflow issues can occur. For a more in-depth discussion of the issues, see:

Bindel, Demmel, Kahan, "On Computing Givens Rotations Reliably and Efficiently"
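As a sketch of the stable unit-direction idea (illustrative only, not GTSAM code): np.hypot computes sqrt(x*x + y*y) without forming the intermediate squares, so the direction cosines stay well-defined even for subnormal inputs:

```python
import numpy as np

def unit_direction(x, y):
    """Direction cosines (c, s) = (x/r, y/r), stable for tiny or huge inputs."""
    r = np.hypot(x, y)        # no explicit x*x + y*y, so no under/overflow
    if r == 0.0:
        return 1.0, 0.0       # conventional choice at the origin
    return x / r, y / r

t = np.nextafter(0.0, 1.0)    # smallest subnormal double
c, s = unit_direction(t, 0.0)
assert (c, s) == (1.0, 0.0)   # t*t would have underflowed to 0 in a naive version
```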
The currently suggested implementation is simple and numerically stable; this is demonstrated and covered by the provided Python tests. In this case, using (r2 == 0) is a well-defined lower threshold imposed on r2, independent of the floating point datatype being used.

Alternatively, a higher threshold could be used: for 1/3 r2 < 0.5 eps, with eps being the machine precision, the distortion induced by arctan(x)/x ~ 1 - 1/3 x^2 becomes indistinguishable from one, and the paraxial approximation can be used. For this choice, k1 should not exceed 1/3 (the first Taylor coefficient of the tan series expansion), and z is assumed to be 1. See the Boost sinc_pi implementation for an illustration of this technique, and note the discussion about using Taylor bounds at https://stackoverflow.com/questions/47215765/is-boostmathsinc-pi-unnecessarily-complicated
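Such a machine-epsilon Taylor bound could be sketched as follows (a hypothetical helper, not code from the patch; the generalization to z != 1 via atan2 is an assumption here):

```python
import numpy as np

def equidistant_scaling(r, z=1.0):
    """theta/r for the equidistant model, with a Taylor-bound paraxial branch.

    With x = r/z, atan(x)/x = 1 - x^2/3 + ..., so once x^2/3 < eps/2 the
    series is indistinguishable from 1 and theta/r reduces to 1/z.
    """
    eps = np.finfo(np.float64).eps
    x = r / z
    if x * x < 1.5 * eps:          # i.e. 1/3 x^2 < 0.5 eps
        return 1.0 / z             # paraxial approximation
    return np.arctan2(r, z) / r

assert equidistant_scaling(0.0) == 1.0                    # on the optical axis
assert abs(equidistant_scaling(1.0) - np.pi / 4) < 1e-15  # 45 deg incidence
```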
My suggestion is to either stay with the simple, type-independent scheme using the condition r2 == 0, or to compute the direction cosines with a stable unit-direction implementation, similar to the Givens rotation scheme proposed in the publication by Bindel et al. Then no paraxial case is necessary.

Reply: Okay, this makes sense. Thanks for the explanation. Can you please address the other comments as well?