Gnc #617

Merged: 45 commits, Jan 4, 2021

Commits (45)
7e29944
Initial design
dellaert Nov 25, 2020
b5d06b5
starting to create test and code for gncParams
Nov 26, 2020
ff40590
added equals in NonlinearOptimizerParams
Nov 27, 2020
90dd2c7
params parsed correctly
Nov 27, 2020
f897fa8
added gnc loop
Nov 27, 2020
a33c50f
now we have very cool tests!
Nov 27, 2020
5222599
2 tests to go
Nov 27, 2020
7c22c2c
simplified small test to make it more understandable
Nov 27, 2020
0f07251
1 test to go
Nov 27, 2020
e991880
stuck on conversion of noise model
Nov 27, 2020
5db6894
finally I have a way to properly change the noise model!
Nov 27, 2020
7ce0641
working on make graph
Nov 27, 2020
556fa83
new constructor test which gets rid of robust loss now passes!
Nov 28, 2020
9e3263f
yay! only the final monster to go!
Nov 28, 2020
dab0090
added verbosity
Nov 28, 2020
ef47741
ladies and gents... GNC!
Nov 28, 2020
c4644a0
added functionality to fix weights
Nov 28, 2020
7699f04
correct formatting
Nov 28, 2020
786d4bb
done - PGO works like a charm!
Nov 28, 2020
fcf2d31
moved class to .h
Dec 5, 2020
db1a366
added comments
Dec 5, 2020
af069ab
fix comment
jingnanshi Dec 6, 2020
47775a7
TLS wip
jingnanshi Dec 7, 2020
9903fb9
tls done except unit tests
jingnanshi Dec 7, 2020
d85d9c6
minor fix
jingnanshi Dec 7, 2020
58e49fc
fix scoping
jingnanshi Dec 7, 2020
d0a81f8
mu initialization test & minor formatting fixes
jingnanshi Dec 7, 2020
9caa0d1
mu update test
jingnanshi Dec 7, 2020
428d17a
correctly check relative difference between mu values at consecutive i…
jingnanshi Dec 7, 2020
594f63d
test fix
jingnanshi Dec 7, 2020
398c013
more unit tests
jingnanshi Dec 8, 2020
75bd3dc
templating on params is still problematic
Dec 22, 2020
7efd5cc
finally fixed the typedef
Dec 22, 2020
0e09f01
fixed templating, added a strict unit test on inlier threshold
Dec 22, 2020
cd82a56
made function name less ambiguous, added more comments on inlierThres…
Dec 22, 2020
046db87
Fix TLS convergence check
jingnanshi Dec 22, 2020
be5d3d2
update function name
jingnanshi Dec 23, 2020
c571744
fix test
jingnanshi Dec 23, 2020
dc5c769
- fixed stopping conditions
Dec 24, 2020
92ed225
minor fixes
jingnanshi Dec 24, 2020
06dfeb7
moved GncParams to separate file, addressing comments by Frank, 1/n
Dec 29, 2020
eea5276
renamed enum
Dec 29, 2020
dfdd206
addressed all except 2 comments by Frank. waiting for inputs on the 2…
Dec 29, 2020
2467238
moved gncLossType outside params
Dec 30, 2020
248eec8
addressed final comments by Frank
Dec 30, 2020
2 changes: 2 additions & 0 deletions gtsam/nonlinear/GaussNewtonOptimizer.h
@@ -28,6 +28,8 @@ class GaussNewtonOptimizer;
* NonlinearOptimizationParams.
*/
class GTSAM_EXPORT GaussNewtonParams : public NonlinearOptimizerParams {
public:
using OptimizerType = GaussNewtonOptimizer;
};

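The OptimizerType alias added above is the hook GncOptimizer uses to recover the concrete optimizer from its parameter type (see the BaseOptimizer typedef below). Any parameter class exposing the alias fits the same pattern; a minimal sketch, with MyOptimizer as a hypothetical stand-in:

  class MyParams : public NonlinearOptimizerParams {
   public:
    using OptimizerType = MyOptimizer;  // hypothetical optimizer class
  };
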
353 changes: 353 additions & 0 deletions gtsam/nonlinear/GncOptimizer.h
@@ -0,0 +1,353 @@
/* ----------------------------------------------------------------------------

* GTSAM Copyright 2010, Georgia Tech Research Corporation,
* Atlanta, Georgia 30332-0415
* All Rights Reserved
* Authors: Frank Dellaert, et al. (see THANKS for the full author list)

* See LICENSE for the license information

* -------------------------------------------------------------------------- */

/**
* @file GncOptimizer.h
* @brief The GncOptimizer class
* @author Jingnan Shi
* @author Luca Carlone
* @author Frank Dellaert
*
* Implementation of the paper: Yang, Antonante, Tzoumas, Carlone, "Graduated Non-Convexity for Robust Spatial Perception:
* From Non-Minimal Solvers to Global Outlier Rejection", ICRA/RAL, 2020. (arxiv version: https://arxiv.org/pdf/1909.08605.pdf)
*
* See also:
* Antonante, Tzoumas, Yang, Carlone, "Outlier-Robust Estimation: Hardness, Minimally-Tuned Algorithms, and Applications",
* arxiv: https://arxiv.org/pdf/2007.15109.pdf, 2020.
*/
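
/* Example usage (an illustrative sketch; it assumes the
 * GncParams<BaseOptimizerParameters> template declared in GncParams.h):
 *
 *   GncParams<GaussNewtonParams> gncParams;
 *   GncOptimizer<GncParams<GaussNewtonParams>> gnc(graph, initialValues, gncParams);
 *   Values estimate = gnc.optimize();
 */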

#pragma once

#include <gtsam/nonlinear/GncParams.h>
#include <gtsam/nonlinear/NonlinearFactorGraph.h>

namespace gtsam {

/* ************************************************************************* */
template<class GncParameters>
class GncOptimizer {
public:
/// For each parameter, specify the corresponding optimizer: e.g., GaussNewtonParams -> GaussNewtonOptimizer.
typedef typename GncParameters::OptimizerType BaseOptimizer;

private:
NonlinearFactorGraph nfg_; ///< Original factor graph to be solved by GNC.
Values state_; ///< Initial values to be used at each iteration by GNC.
GncParameters params_; ///< GNC parameters.
Vector weights_; ///< Weights associated with each factor in GNC (these could be local to optimize(), but are stored to make them accessible from outside).

public:
/// Constructor.
GncOptimizer(const NonlinearFactorGraph& graph, const Values& initialValues,
const GncParameters& params = GncParameters())
: state_(initialValues),
params_(params) {

// make sure all noiseModels are Gaussian or convert to Gaussian
nfg_.resize(graph.size());
for (size_t i = 0; i < graph.size(); i++) {
if (graph[i]) {
NoiseModelFactor::shared_ptr factor = boost::dynamic_pointer_cast<
NoiseModelFactor>(graph[i]);
noiseModel::Robust::shared_ptr robust = boost::dynamic_pointer_cast<
noiseModel::Robust>(factor->noiseModel());
if (robust) { // if the factor has a robust loss, we have to change it:
SharedNoiseModel gaussianNoise = robust->noise();
NoiseModelFactor::shared_ptr gaussianFactor = factor
->cloneWithNewNoiseModel(gaussianNoise);
nfg_[i] = gaussianFactor;
} else { // else we directly push it back
nfg_[i] = factor;
}
}
}
}
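
/* Example (sketch): a factor whose noise model was built as
 * noiseModel::Robust::Create(noiseModel::mEstimator::Huber::Create(k), gaussian)
 * enters nfg_ carrying only the inner gaussian model; robustness is instead
 * obtained through the GNC weights computed in optimize(). */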

/// Access a copy of the internal factor graph.
NonlinearFactorGraph getFactors() const {
return NonlinearFactorGraph(nfg_);
}

/// Access a copy of the internal values.
Values getState() const {
return Values(state_);
}

/// Access a copy of the parameters.
GncParameters getParams() const {
return GncParameters(params_);
}

/// Access a copy of the GNC weights.
Vector getWeights() const {
return weights_;
}

/// Compute optimal solution using graduated non-convexity.
Values optimize() {
// start by assuming all measurements are inliers
weights_ = Vector::Ones(nfg_.size());
BaseOptimizer baseOptimizer(nfg_, state_);
Values result = baseOptimizer.optimize();
double mu = initializeMu();
double prev_cost = nfg_.error(result);
double cost = 0.0; // this will be updated in the main loop

// handle the degenerate case that corresponds to small
// maximum residual errors at initialization
// For GM: if residual error is small, mu -> 0
// For TLS: if residual error is small, mu -> -1
if (mu <= 0) {
if (params_.verbosity >= GncParameters::Verbosity::SUMMARY) {
std::cout << "GNC Optimizer stopped because maximum residual at "
"initialization is small."
<< std::endl;
}
if (params_.verbosity >= GncParameters::Verbosity::VALUES) {
result.print("result\n");
std::cout << "mu: " << mu << std::endl;
}
return result;
}

size_t iter;
for (iter = 0; iter < params_.maxIterations; iter++) {

// display info
if (params_.verbosity >= GncParameters::Verbosity::VALUES) {
std::cout << "iter: " << iter << std::endl;
result.print("result\n");
std::cout << "mu: " << mu << std::endl;
std::cout << "weights: " << weights_ << std::endl;
}
// weights update
weights_ = calculateWeights(result, mu);

// variable/values update
NonlinearFactorGraph graph_iter = this->makeWeightedGraph(weights_);
BaseOptimizer baseOptimizer_iter(graph_iter, state_);
result = baseOptimizer_iter.optimize();

// stopping condition
cost = graph_iter.error(result);
if (checkConvergence(mu, weights_, cost, prev_cost)) {
break;
}

// update mu
mu = updateMu(mu);

// get ready for next iteration
prev_cost = cost;

// display info
if (params_.verbosity >= GncParameters::Verbosity::VALUES) {
std::cout << "previous cost: " << prev_cost << std::endl;
std::cout << "current cost: " << cost << std::endl;
}
}
// display info
if (params_.verbosity >= GncParameters::Verbosity::SUMMARY) {
std::cout << "final iterations: " << iter << std::endl;
std::cout << "final mu: " << mu << std::endl;
std::cout << "final weights: " << weights_ << std::endl;
std::cout << "previous cost: " << prev_cost << std::endl;
std::cout << "current cost: " << cost << std::endl;
}
return result;
}
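
/* Example (sketch): after optimize(), the weights expose the inlier/outlier
 * decisions made by GNC:
 *
 *   Values estimate = gnc.optimize();
 *   Vector w = gnc.getWeights();  // for TLS, w[k] ~ 1 for inliers, ~ 0 for outliers
 */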

/// Initialize the GNC parameter mu such that the surrogate loss is approximately convex (Remark 5 in the GNC paper).
double initializeMu() const {
// compute largest error across all factors
double rmax_sq = 0.0;
for (size_t i = 0; i < nfg_.size(); i++) {
if (nfg_[i]) {
rmax_sq = std::max(rmax_sq, nfg_[i]->error(state_));
}
}
// set initial mu
switch (params_.lossType) {
case GncLossType::GM:
// surrogate cost is convex for large mu
return 2 * rmax_sq / params_.barcSq; // initial mu
case GncLossType::TLS:
/* Initialize mu to the value specified in Remark 5 of the GNC paper:
the surrogate cost is convex for mu close to zero, and
mu = params_.barcSq / (2 * rmax_sq - params_.barcSq) = params_.barcSq / excessResidual.
If the denominator is zero or negative (the degenerate case of small residuals,
handled in the main loop), we return mu = -1, which terminates the main GNC loop.
*/
return
(2 * rmax_sq - params_.barcSq) > 0 ?
params_.barcSq / (2 * rmax_sq - params_.barcSq) : -1;
default:
throw std::runtime_error(
"GncOptimizer::initializeMu: called with unknown loss type.");
}
}
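
/* Worked example (sketch): with barcSq = 1 and largest whitened squared
 * residual rmax_sq = 9, GM starts at mu = 2 * 9 / 1 = 18 (strongly convex
 * surrogate), while TLS starts at mu = 1 / (2 * 9 - 1) = 1/17 (close to zero). */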

/// Update the gnc parameter mu to gradually increase nonconvexity.
double updateMu(const double mu) const {
switch (params_.lossType) {
case GncLossType::GM:
// reduce mu, but saturate at 1 (original cost is recovered for mu -> 1)
return std::max(1.0, mu / params_.muStep);
case GncLossType::TLS:
// increase mu at each iteration (original cost is recovered for mu -> inf)
return mu * params_.muStep;
default:
throw std::runtime_error(
"GncOptimizer::updateMu: called with unknown loss type.");
}
}
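
/* Example (sketch): with muStep = 1.4, GM anneals mu as 18 -> 12.86 -> 9.18
 * -> ... saturating at 1, while TLS grows mu geometrically:
 * 1/17 -> 1.4/17 -> 1.96/17 -> ... */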

/// Check if we have reached the value of mu for which the surrogate loss matches the original loss.
bool checkMuConvergence(const double mu) const {
bool muConverged = false;
switch (params_.lossType) {
case GncLossType::GM:
muConverged = std::fabs(mu - 1.0) < 1e-9; // mu=1 recovers the original GM function
break;
case GncLossType::TLS:
muConverged = false; // for TLS there is no stopping condition on mu (it must tend to infinity)
break;
default:
throw std::runtime_error(
"GncOptimizer::checkMuConvergence: called with unknown loss type.");
}
if (muConverged && params_.verbosity >= GncParameters::Verbosity::SUMMARY)
std::cout << "muConverged = true " << std::endl;
return muConverged;
}

/// Check convergence of relative cost differences.
bool checkCostConvergence(const double cost, const double prev_cost) const {
bool costConverged = std::fabs(cost - prev_cost) / std::max(prev_cost, 1e-7)
< params_.relativeCostTol;
if (costConverged && params_.verbosity >= GncParameters::Verbosity::SUMMARY)
std::cout << "checkCostConvergence = true " << std::endl;
return costConverged;
}
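
/* Example (sketch): with relativeCostTol = 1e-5, prev_cost = 1.0 and
 * cost = 0.999995 give |cost - prev_cost| / prev_cost = 5e-6 < 1e-5,
 * so this check fires and the GNC loop stops. */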

/// Check convergence of weights to binary values.
bool checkWeightsConvergence(const Vector& weights) const {
bool weightsConverged = false;
switch (params_.lossType) {
case GncLossType::GM:
weightsConverged = false; // for GM, there is no clear binary convergence for the weights
break;
case GncLossType::TLS:
weightsConverged = true;
for (size_t i = 0; i < weights.size(); i++) {
if (std::fabs(weights[i] - std::round(weights[i]))
> params_.weightsTol) {
weightsConverged = false;
break;
}
}
break;
default:
throw std::runtime_error(
"GncOptimizer::checkWeightsConvergence: called with unknown loss type.");
}
if (weightsConverged
&& params_.verbosity >= GncParameters::Verbosity::SUMMARY)
std::cout << "weightsConverged = true " << std::endl;
return weightsConverged;
}

/// Check for convergence between consecutive GNC iterations.
bool checkConvergence(const double mu, const Vector& weights,
const double cost, const double prev_cost) const {
return checkCostConvergence(cost, prev_cost)
|| checkWeightsConvergence(weights) || checkMuConvergence(mu);
}

/// Create a graph where each factor is weighted by the gnc weights.
NonlinearFactorGraph makeWeightedGraph(const Vector& weights) const {
// scale the information matrix of each factor's Gaussian noise model by its weight
NonlinearFactorGraph newGraph;
newGraph.resize(nfg_.size());
for (size_t i = 0; i < nfg_.size(); i++) {
if (nfg_[i]) {
NoiseModelFactor::shared_ptr factor = boost::dynamic_pointer_cast<
NoiseModelFactor>(nfg_[i]);
noiseModel::Gaussian::shared_ptr noiseModel =
boost::dynamic_pointer_cast<noiseModel::Gaussian>(
factor->noiseModel());
if (noiseModel) {
Matrix newInfo = weights[i] * noiseModel->information();
SharedNoiseModel newNoiseModel = noiseModel::Gaussian::Information(
newInfo);
newGraph[i] = factor->cloneWithNewNoiseModel(newNoiseModel);
} else {
throw std::runtime_error(
"GncOptimizer::makeWeightedGraph: unexpected non-Gaussian noise model.");
}
}
}
return newGraph;
}
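
/* Note: scaling the information matrix by w_k scales the factor's whitened
 * squared error by w_k, so a factor with weight 0 is effectively removed from
 * the optimization, while weight 1 leaves it unchanged. */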

/// Calculate gnc weights.
Vector calculateWeights(const Values& currentEstimate, const double mu) {
Vector weights = Vector::Ones(nfg_.size());

// do not update the weights that the user has decided are known inliers
std::vector<size_t> allWeights;
for (size_t k = 0; k < nfg_.size(); k++) {
allWeights.push_back(k);
}
std::vector<size_t> unknownWeights;
std::set_difference(allWeights.begin(), allWeights.end(),
params_.knownInliers.begin(),
params_.knownInliers.end(),
std::inserter(unknownWeights, unknownWeights.begin()));

// update weights of the measurements not known a priori to be inliers
switch (params_.lossType) {
case GncLossType::GM: { // use eq (12) in GNC paper
for (size_t k : unknownWeights) {
if (nfg_[k]) {
double u2_k = nfg_[k]->error(currentEstimate); // squared (and whitened) residual
weights[k] = std::pow(
(mu * params_.barcSq) / (u2_k + mu * params_.barcSq), 2);
}
}
return weights;
}
case GncLossType::TLS: { // use eq (14) in GNC paper
double upperbound = (mu + 1) / mu * params_.barcSq;
double lowerbound = mu / (mu + 1) * params_.barcSq;
for (size_t k : unknownWeights) {
if (nfg_[k]) {
double u2_k = nfg_[k]->error(currentEstimate); // squared (and whitened) residual
if (u2_k >= upperbound) {
weights[k] = 0;
} else if (u2_k <= lowerbound) {
weights[k] = 1;
} else {
weights[k] = std::sqrt(params_.barcSq * mu * (mu + 1) / u2_k)
- mu;
}
}
}
return weights;
}
default:
throw std::runtime_error(
"GncOptimizer::calculateWeights: called with unknown loss type.");
}
}
};

}  // namespace gtsam
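
For completeness, a minimal end-to-end sketch of the new API (it assumes an existing NonlinearFactorGraph graph and Values initial; the setters setLossType and setKnownInliers live in GncParams.h, which is not part of this excerpt, so treat the exact names as assumptions):

  #include <gtsam/nonlinear/GaussNewtonOptimizer.h>
  #include <gtsam/nonlinear/GncOptimizer.h>

  using namespace gtsam;

  // Run GNC on top of Gauss-Newton, with the TLS loss to reject gross outliers.
  GncParams<GaussNewtonParams> gncParams;
  gncParams.setLossType(GncLossType::TLS);
  gncParams.setKnownInliers(std::vector<size_t>{0});  // e.g., trust the prior factor

  GncOptimizer<GncParams<GaussNewtonParams>> gnc(graph, initial, gncParams);
  Values estimate = gnc.optimize();

  // Weights near zero flag the measurements GNC rejected as outliers.
  Vector weights = gnc.getWeights();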