
Adds support for energy based learning with NLL loss (LEO) #30

Merged: 11 commits from psodhi.leo into main on Dec 27, 2021

Conversation

@psodhi (Contributor) commented on Dec 14, 2021

This PR introduces LEO, learning energy-based models in optimization (https://arxiv.org/abs/2108.02274). LEO is a method to learn models end-to-end within second-order optimizers like Gauss-Newton. The main difference from prior approaches is that instead of unrolling the optimizer and minimizing an MSE tracking loss, LEO minimizes an NLL energy-based loss that does not backpropagate through the optimizer. It only requires low-energy samples from the optimizer, and the loss pushes up the energy of those optimizer samples while pushing down the energy of the ground-truth samples.

To execute it, run
python examples/state_estimation_2d.py with learning_method="leo"

This should update the learnable cost weights so that the optimizer trajectory (orange) matches the ground truth trajectory (green).
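For intuition, here is a minimal sketch of the NLL loss described above, assuming a callable energy_fn that maps a state to a scalar energy under the current learnable cost weights, a ground-truth state gt_state, and a list of samples drawn around the optimizer solution (optimizer_samples). These names are illustrative and not the actual Theseus API:

```python
import torch

def leo_nll_loss(energy_fn, gt_state, optimizer_samples):
    # Energy of the ground-truth state; depends on the learnable cost weights.
    e_gt = energy_fn(gt_state)
    # Energies of samples drawn around the optimizer solution. The samples are
    # detached so that no gradient flows back through the inner optimizer.
    e_samples = torch.stack([energy_fn(s.detach()) for s in optimizer_samples])
    # Monte Carlo approximation of the log-partition function using the samples.
    log_z = torch.logsumexp(-e_samples, dim=0) - torch.log(
        torch.tensor(float(len(optimizer_samples)))
    )
    # Minimizing e_gt + log_z pushes the ground-truth energy down and the
    # optimizer-sample energies up.
    return e_gt + log_z
```

Because the samples are detached, gradients reach the cost weights only through the energy evaluations, not through the inner optimizer iterations.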

@facebook-github-bot added the CLA Signed label on Dec 14, 2021 (authors need to sign the CLA before a PR can be reviewed)
@psodhi marked this pull request as draft on December 14, 2021 16:49
@psodhi linked an issue on Dec 14, 2021 that may be closed by this pull request
@psodhi requested a review from luisenp on December 14, 2021 17:49
@psodhi marked this pull request as ready for review on December 14, 2021 17:49
@mhmukadam added this to the 0.1.0-b.2 milestone on Dec 17, 2021
@mhmukadam (Member) left a comment

Looks great! The API looks good -- just a sampling function in the optimizer class. I added some comments to address. The GPU test is failing because the Cholesky upper option is not available. We can add support for sparse solvers and optimize performance in later PRs.

@luisenp (Contributor) left a comment

Great job on this PR! I see all the functionality is there, but I propose we shuffle some things around a bit. I'm OK leaving that as a separate PR, but we should definitely make issues for the comments so we don't forget.

Regarding the broken test: the upper= kwarg for cholesky was only introduced in torch 1.10.0. It exists in the older torch.cholesky, but torch.linalg.cholesky does not accept it in v1.9.0 (see screenshot).

For now, can you update the call to match the API available in v1.9.0 so that the test passes?
[screenshot omitted]
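For reference, one way to obtain the upper Cholesky factor that works on torch 1.9 (an illustrative sketch, not necessarily the exact change made in this PR):

```python
import torch

def cholesky_upper_compat(A: torch.Tensor) -> torch.Tensor:
    # torch.linalg.cholesky (available in 1.9) returns the lower-triangular
    # factor L with A = L @ L^T (L^H in the complex case); transposing it
    # yields the upper factor without relying on the upper= kwarg from 1.10.
    L = torch.linalg.cholesky(A)
    return L.transpose(-2, -1).conj()
```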

Files with review comments:
- theseus/optimizer/nonlinear/gauss_newton.py
- theseus/optimizer/nonlinear/levenberg_marquardt.py
- theseus/optimizer/nonlinear/nonlinear_optimizer.py
- theseus/tests/test_theseus_layer.py
- examples/state_estimation_2d.py
@mhmukadam merged commit b021b08 into main on Dec 27, 2021
@mhmukadam deleted the psodhi.leo branch on December 27, 2021 16:03
suddhu pushed a commit to suddhu/theseus that referenced this pull request on Jan 21, 2023 (…esearch#30):

* add tests for leo with GN/LM optimizers
* add sampler to GN/LM optimizers
* run leo on 2d state estimation, add viz, learning_method options
Labels: CLA Signed
Projects: None yet
Development: successfully merging this pull request may close the issue "Energy based learning with NLL loss (LEO) support"
4 participants