mEstimator Error with Trust Region Optimizers #1129
Comments
@yetongumich may have some insights here? I am not really sure, as the last time I looked at the robust code was about 2 years ago...
Let's have a quick chat about this tonight?
Sure
I actually stumbled upon this yesterday doing some robust SfM stuff. Yes, this is indeed annoying...
@DanMcGann Thanks for the detailed report and analysis!
You have all the details fresher... would you mind quickly writing down (just as a comment here) which two functions you are thinking of? Then we might find a way to make it less redundant via a virtual method or the like.
My initial thoughts were to have the two separate functions described in the report (one for linearization, one for evaluating the system error).
However, taking a look at the code again, I cannot figure out how I previously came to the conclusion that this would be necessary.
This fresh look leads me towards the same fix that was proposed in #1161.
Description
When using robust noise models, trust-region optimizers use an incorrect value for the system error when evaluating convergence and step-rejection criteria. Specifically, these optimizers evaluate convergence on the sum of weighted squared residuals

$$E = \frac{1}{2}\sum_i w(r_i)\, r_i^2$$

rather than the robust cost

$$E = \sum_i \rho(r_i)$$

where $r_i$ is the whitened residual norm of factor $i$, $\rho$ is the mEstimator loss, and $w(x) = \rho'(x)/x$ is the IRLS weight. Here I have used the same notation style as the robust estimator reference provided in the documentation (LINK), but the exact form computed by GTSAM. Plugging in the definition of the weight, we can quickly see the inequality of these terms for many mEstimator forms.
For example, Cauchy (with scale $c = 1$ for brevity) has weight $w(x) = \frac{1}{1 + x^2}$. Plugging in the form of the Cauchy loss, $\rho(x) = \frac{1}{2}\ln(1 + x^2)$, we get

$$\frac{1}{2} w(r)\, r^2 = \frac{r^2}{2(1 + r^2)} \;\neq\; \frac{1}{2}\ln(1 + r^2) = \rho(r)$$

with equality only at $r = 0$.
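To make the inequality concrete, here is a minimal sketch (plain Python, no GTSAM required) that evaluates both quantities for Cauchy at a few residual magnitudes, using the definitions above:

```python
import math

def cauchy_loss(r, c=1.0):
    # rho(r) = (c^2 / 2) * ln(1 + r^2 / c^2): the true robust cost
    return 0.5 * c * c * math.log(1.0 + (r / c) ** 2)

def cauchy_weight(r, c=1.0):
    # w(r) = rho'(r) / r = 1 / (1 + r^2 / c^2): the IRLS weight
    return 1.0 / (1.0 + (r / c) ** 2)

for r in [0.1, 1.0, 3.0, 10.0]:
    weighted = 0.5 * cauchy_weight(r) * r * r   # what the optimizers currently check
    robust = cauchy_loss(r)                     # what convergence checks should use
    print(f"r={r:5.1f}  weighted={weighted:.4f}  robust cost={robust:.4f}")
```

At $r = 10$ this gives roughly $0.495$ versus $2.308$, so the reported error badly understates the true cost for large residuals.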
The discrepancy can greatly affect the determination of convergence and step acceptance by our trust-region optimizers, especially for the mEstimators that are asymptotically constant, like Geman-McClure (see below). For these estimators, the system error (in its current implementation) can actually increase as the estimate gets closer to the mean, meaning steps will be rejected by LM or Dogleg and the system will converge prematurely.
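A quick numerical check of the Geman-McClure case (a sketch assuming GTSAM's convention $w(x) = c^4/(c^2 + x^2)^2$ and $\rho(x) = \frac{c^2}{2}\frac{x^2}{c^2 + x^2}$, with $c = 1$) shows the weighted error moving opposite to the true cost:

```python
def gm_weight(r, c=1.0):
    # Geman-McClure IRLS weight: w(r) = c^4 / (c^2 + r^2)^2 (assumed GTSAM convention)
    return c ** 4 / (c * c + r * r) ** 2

def gm_loss(r, c=1.0):
    # True Geman-McClure cost: rho(r) = (c^2 / 2) * r^2 / (c^2 + r^2)
    return 0.5 * c * c * r * r / (c * c + r * r)

# As the residual shrinks from 10 toward the scale c = 1, the true robust cost
# falls, but the weighted error being reported rises -- so LM/Dogleg reject the step.
for r in [10.0, 5.0, 2.0, 1.0]:
    print(f"r={r:4.1f}  weighted={0.5 * gm_weight(r) * r * r:.4f}  robust={gm_loss(r):.4f}")
```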
Steps to reproduce
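The original reproduction steps are not preserved here; the following is a minimal sketch that should exhibit the behavior, assuming the GTSAM 4.x Python wrapper (the single robust prior, key `0`, and scale value `1.0` are illustrative choices, not from the original report):

```python
import gtsam

# A single robust prior pulling x toward the origin, initialized far away.
base = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)
robust = gtsam.noiseModel.Robust.Create(
    gtsam.noiseModel.mEstimator.GemanMcClure.Create(1.0), base)

graph = gtsam.NonlinearFactorGraph()
graph.add(gtsam.PriorFactorPoint2(0, gtsam.Point2(0.0, 0.0), robust))

initial = gtsam.Values()
initial.insert(0, gtsam.Point2(10.0, 0.0))

# SUMMARY verbosity prints per-iteration error: watch for rejected steps and
# premature convergence far from the prior mean.
params = gtsam.LevenbergMarquardtParams()
params.setVerbosityLM("SUMMARY")
result = gtsam.LevenbergMarquardtOptimizer(graph, initial, params).optimize()
print("final estimate:", result.atPoint2(0))
```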
Expected behavior
I would expect that, when evaluating the system error, optimizers would use the actual robust cost rather than the weighted squared residual. Correspondingly, I would expect to see better convergence from the trust-region optimizers than is empirically observed. For an empirical evaluation of convergence, see the script below.
Convergence Evaluation Script
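The author's script is not preserved here. A rough stand-in under the same assumptions, logging the error GTSAM reports against the true robust cost per iteration, might look like the following (note: `loss()` is the mEstimator method name in recent GTSAM releases; older releases call it `residual()`):

```python
import math
import gtsam

mest = gtsam.noiseModel.mEstimator.GemanMcClure.Create(1.0)
base = gtsam.noiseModel.Isotropic.Sigma(2, 1.0)

# Same measurement in two graphs: a robust one to optimize, and a plain one
# used only to recover the whitened residual norm r at the current estimate.
robust_graph = gtsam.NonlinearFactorGraph()
plain_graph = gtsam.NonlinearFactorGraph()
mean = gtsam.Point2(0.0, 0.0)
robust_graph.add(gtsam.PriorFactorPoint2(
    0, mean, gtsam.noiseModel.Robust.Create(mest, base)))
plain_graph.add(gtsam.PriorFactorPoint2(0, mean, base))

values = gtsam.Values()
values.insert(0, gtsam.Point2(10.0, 0.0))

optimizer = gtsam.LevenbergMarquardtOptimizer(robust_graph, values)
for it in range(10):
    est = optimizer.values()
    r = math.sqrt(2.0 * plain_graph.error(est))   # whitened residual norm
    reported = robust_graph.error(est)            # 0.5 * w(r) * r^2: what GTSAM checks
    true_cost = mest.loss(r)                      # rho(r): what it arguably should check
    print(f"iter {it:2d}  reported={reported:.4f}  true robust cost={true_cost:.4f}")
    optimizer.iterate()                           # one LM step
```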
Environment
Additional information
I was tempted to just submit a PR with a fix, as this is a fairly small (though important) bug. However, due to implementation details, it is not a trivial fix. It could be fixed by having two functions in the factor interface: one used for system linearization, the other used for evaluating the system error (sketched below). However, this would be redundant for most cases. An alternative would be to change the entire implementation scheme for robust estimators, having them wrap factors rather than noiseModels, so that weighting could be applied during linearization while the error is still evaluated using the robust cost. This, however, would be a massive interface change, so I opted to just submit a bug report.
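For concreteness, the two-function option amounts to something like the following sketch (all names hypothetical, not GTSAM API): the factor exposes the weighted squared residual for building the linear system, and the robust loss for convergence and step-acceptance checks.

```python
import math

class RobustFactorSketch:
    """Hypothetical factor exposing the two error notions from the report."""

    def __init__(self, residual_fn, loss_fn, weight_fn):
        self._residual = residual_fn  # x -> whitened residual norm r(x)
        self._loss = loss_fn          # r -> rho(r)
        self._weight = weight_fn      # r -> rho'(r) / r

    def linearization_error(self, x):
        # Quantity consistent with the IRLS-weighted linear system (current behavior).
        r = self._residual(x)
        return 0.5 * self._weight(r) * r * r

    def total_error(self, x):
        # Quantity a trust-region optimizer should use for convergence/step acceptance.
        return self._loss(self._residual(x))

# Example with Cauchy (c = 1) and a scalar "estimate minus measurement" residual.
f = RobustFactorSketch(
    residual_fn=lambda x: abs(x - 0.0),
    loss_fn=lambda r: 0.5 * math.log(1.0 + r * r),
    weight_fn=lambda r: 1.0 / (1.0 + r * r),
)
print(f.linearization_error(3.0), f.total_error(3.0))  # 0.45 vs ~1.15
```

For non-robust factors the two methods would coincide, which is the redundancy concern noted above.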