
Bug in se3_exp_map and/or se3_log_map: some transformations are not reconstructed after logarithm and exponentiation #1609

Closed
paucarre opened this issue Aug 12, 2023 · 10 comments


paucarre commented Aug 12, 2023

🐛 Bugs / Unexpected behaviors

There is a bug in se3_exp_map and/or se3_log_map (I think likely the latter): for some transformations, applying the log operation and then the exponential gives a result that is off by a very large margin from the original transformation.
This does not happen in other libraries (example below), so I think the bug is in PyTorch3D.

Formally, the following assertion is not always satisfied by PyTorch3D:

transformation = exp( log ( transformation ) )
where transformation belongs to SE(3)
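For readers who want to reproduce the math without either library, here is a minimal NumPy sketch of the textbook SE(3) exp/log formulas (my own implementation, not PyTorch3D's; I use the twist order [v, ω] and the column-vector convention, so the layout does not match PyTorch3D exactly). Note the θ/(2 sin θ) factor in the log: that term is ill-conditioned as θ approaches π, which is the regime the matrices below turn out to be in.

```python
import numpy as np

def hat(w):
    """Skew-symmetric matrix such that hat(w) @ x == np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def se3_exp(xi):
    """Twist xi = [v, w] (6,) -> 4x4 homogeneous transform."""
    v, w = xi[:3], xi[3:]
    theta = np.linalg.norm(w)
    W = hat(w)
    if theta < 1e-10:
        R, V = np.eye(3) + W, np.eye(3)
    else:
        A = np.sin(theta) / theta
        B = (1.0 - np.cos(theta)) / theta**2
        C = (theta - np.sin(theta)) / theta**3
        R = np.eye(3) + A * W + B * (W @ W)
        V = np.eye(3) + B * W + C * (W @ W)
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, V @ v
    return T

def se3_log(T):
    """4x4 homogeneous transform -> twist [v, w]; singular near theta = pi."""
    R, t = T[:3, :3], T[:3, 3]
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-10:
        return np.concatenate([t, np.zeros(3)])
    # This factor blows up as theta -> pi, where sin(theta) -> 0.
    w = theta / (2.0 * np.sin(theta)) * np.array(
        [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    W = hat(w)
    V_inv = (np.eye(3) - 0.5 * W
             + (1.0 - theta * np.sin(theta) / (2.0 * (1.0 - np.cos(theta))))
             / theta**2 * (W @ W))
    return np.concatenate([V_inv @ t, w])
```

Away from θ = π this satisfies se3_log(se3_exp(xi)) ≈ xi to machine precision; the failures reported in this thread are precisely about matrices near the singular case.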

Instructions To Reproduce the Issue:

Run the following code snippet; it should pass, but it fails.

import torch
from pytorch3d import transforms
import numpy as np
T = torch.Tensor([[[-0.7384057045,  0.3333132863, -0.5862244964,  0.0000000000],
         [ 0.3520625532, -0.5508944392, -0.7566816807,  0.0000000000],
         [-0.5751599669, -0.7651259303,  0.2894364297,  0.0000000000],
         [-0.1840534210, -0.1836946011,  0.9952554703,  1.0000000000]]])
eq_T = transforms.se3_exp_map(transforms.se3_log_map(T))
assert np.isclose(
        T,
        eq_T,
        rtol=1e-5,
        atol=1e-5,  # fails even with atol=2
    ).all()
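For what it's worth, the rotation part of this matrix sits almost exactly at the θ = π singularity of the closed-form log map, which is easy to verify from its trace (the same holds for the second matrix below; the trace is transpose-invariant, so PyTorch3D's row-vector layout doesn't matter here):

```python
import math
import numpy as np

# Rotation block of the failing transform quoted above.
R = np.array([[-0.7384057045,  0.3333132863, -0.5862244964],
              [ 0.3520625532, -0.5508944392, -0.7566816807],
              [-0.5751599669, -0.7651259303,  0.2894364297]])

cos_theta = (np.trace(R) - 1.0) / 2.0
theta = math.acos(max(-1.0, min(1.0, cos_theta)))
print(theta)  # ~3.1299, i.e. about 0.012 rad below pi
```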

Another transformation that causes problems is, for instance:

T = torch.Tensor([[[-0.7400283217,  0.5210028887, -0.4253400862,  0.0000000000],
         [ 0.5329059958,  0.0683888718, -0.8434065580,  0.0000000000],
         [-0.4103286564, -0.8508108258, -0.3282552958,  0.0000000000],
         [-0.1197679043,  0.1799146235,  0.5538908839,  1.0000000000]]])

These transformations come from the forward kinematics of an industrial robot (CR5).
The bug prevents on-manifold optimization from working.

Note that the open-source library pytransform3d handles these cases correctly. You might want to take a look at its implementation of the logarithm and exponential.
An example of a test that passes is:

import torch
import numpy as np
from pytransform3d.transformations import (
    transform_log_from_transform,
    transform_from_exponential_coordinates,
    exponential_coordinates_from_transform_log,
)
T = torch.Tensor(
       [[-0.7384057045,  0.3520625532, -0.5751599669, -0.1840534210],
        [ 0.3333132863, -0.5508944392, -0.7651259303, -0.1836946011],
        [-0.5862244964, -0.7566816807,  0.2894364297,  0.9952554703],
        [ 0.0000000000,  0.0000000000,  0.0000000000,  1.0000000000]])
vee_log_T = exponential_coordinates_from_transform_log(transform_log_from_transform(T))
eq_T = transform_from_exponential_coordinates(vee_log_T)
assert np.isclose(
    T,
    eq_T,
    rtol=1e-5,
    atol=1e-5, 
).all()

As a final note, the logarithm and exponential are basic tools used by other algorithms. If they are used inside PyTorch3D, this bug is probably creating bugs in other parts of the library that will be very difficult to track down, since it affects a small but significant subset of SE(3).

Comparing the logarithms from PyTorch3D and pytransform3d, I can see differences. Note that here I swapped the first three elements of pytransform3d's output, since its twists are ordered the other way around compared to PyTorch3D:

Logarithm from pytransform3d

[ -0.850257456302643, -0.117023736238480,  0.734556674957275,
1.131854534149170,  1.483072876930237, -2.513123512268066 ]

Logarithm from PyTorch3D

[ -0.8343285918235779, -0.11148931086063385, 0.7449967861175537,
   1.1116403341293335, 1.4565861225128174, -2.468240737915039]

Both the angular and linear parts are quite different.
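A quick consistency check on these two outputs (my own arithmetic on the numbers quoted above): the norm of the angular part must equal the rotation angle θ, and from trace(R) = 1 + 2 cos θ the matrix in question has θ ≈ 3.1299 rad. Only the pytransform3d value matches:

```python
import numpy as np

# Angular (last three) components of each library's logarithm, from above.
w_pytransform3d = np.array([1.131854534149170, 1.483072876930237, -2.513123512268066])
w_pytorch3d = np.array([1.1116403341293335, 1.4565861225128174, -2.468240737915039])

print(np.linalg.norm(w_pytransform3d))  # ~3.1299: agrees with the trace-based angle
print(np.linalg.norm(w_pytorch3d))      # ~3.0740: does not
```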

paucarre changed the title from "Transformation is not reconstructed after logarithm and exponentiation" to "Bug in se3_exp_map and/or se3_log_map: some transformations are not reconstructed after logarithm and exponentiation" on Aug 12, 2023
bottler (Contributor) commented Aug 13, 2023

Thanks, this is interesting. The matrix_to_axis_angle function in rotation_conversions goes a long way round, but it is a better implementation of the rotation part of the calculation. (And yes, there is a recent PR or issue noting that it is inefficient, but it isn't about to be changed.) transforms.matrix_to_axis_angle(T[:, :3, :3]).tolist() gives [[-1.1318340301513672, -1.4830864667892456, 2.513122797012329]], which is essentially the same as the numbers from pytransform3d. Perhaps we should change se3_log_map and so3_log_map, which aren't used elsewhere in PyTorch3D, to use it.

(If I change the cos_bound on se3_log_map to 1e-5 or smaller (default is 1e-4) the match is a bit better but still not good.)

paucarre (Author) commented Aug 14, 2023

Thanks for taking a look at this @bottler !

I'd say that se3_log_map should first be correct and, when possible, fast. If se3_log_map is incorrect in a substantial number of cases, then regardless of how fast it is, I can't see any use for it.

I guess it would be straightforward to use matrix_to_axis_angle for the SO(3) log, and then extracting the translational log part shouldn't be a big deal, should it? I can't see any drawback to using it in se3_log_map.

Thanks for the cos_bound suggestion; it might serve as a temporary workaround until a proper fix for se3_log_map is in place.

Finally, for the fix, it would be nice to get some hard unit tests with SE(3) matrices close to the singularities, so that the bug doesn't reappear if the algorithm is ever changed (maybe to make it faster?).
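Such near-singular cases are easy to generate; here is one possible sketch (the function names and the eps parameter are mine, using NumPy and Rodrigues' formula rather than any PyTorch3D API):

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: rotation matrix for a given axis and angle."""
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def near_pi_se3_cases(n=100, eps=1e-3, seed=0):
    """SE(3) matrices whose rotation angle is pi - eps: the hard cases."""
    rng = np.random.default_rng(seed)
    cases = []
    for _ in range(n):
        T = np.eye(4)
        T[:3, :3] = rotation_about_axis(rng.normal(size=3), np.pi - eps)
        T[:3, 3] = rng.normal(size=3)
        cases.append(T)
    return cases
```

A test suite could then assert exp(log(T)) ≈ T over these cases for several values of eps down to machine precision.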

paucarre (Author) commented Aug 15, 2023

@bottler I have changed my local pytorch3d and ran the tests I had; it works well.
I will get a PR ready, hopefully over the weekend. If you are already working on it, please tell me, as otherwise we would be duplicating efforts :)

paucarre (Author) commented Sep 15, 2023

@bottler
I have been working on this and I have a "fix" using quaternions.
The problem is that, while using it in a project I'm developing, I discovered that the logarithm also fails with quaternions. This means that either the quaternion conversion or the logarithm on the quaternion is broken.
You can try the quaternion-based SE(3) logarithm directly by running:

python -c 'import pytorch3d; print(f"wget https://raw.githubusercontent.com/paucarre/pytorch3d/se3_log_map_fix/pytorch3d/transforms/se3.py -O  {pytorch3d.__path__[0]}/transforms/se3.py")' | bash
python -c 'import pytorch3d; print(f"wget https://raw.githubusercontent.com/paucarre/pytorch3d/se3_log_map_fix/pytorch3d/transforms/so3.py -O  {pytorch3d.__path__[0]}/transforms/so3.py")' | bash

Here you have a code snippet of a pose where the logarithm using quaternions fails:

import torch
from pytorch3d import transforms

a = torch.tensor([[-0.96997839212417602539, -1.11289477348327636719,
         -0.65695625543594360352, -1.23729753494262695312,
          0.16998650133609771729,  1.77770030498504638672]])
error = (a - transforms.se3_log_map(transforms.se3_exp_map(a))).abs().sum()

where the error is above 17 😨

Using quaternions for the logarithm does work much better: the frequency of badly reconstructed poses is significantly reduced. But even with quaternions there are transformations where it breaks quite badly. As far as I know, this code may be used elsewhere in PyTorch3D, so you might have a very infrequent bug that, when triggered, can be quite damaging.

csrqli commented Oct 23, 2023

In my case, performing se3_log_map and then se3_exp_map gives a large error for the following input SE(3) matrix:

SE3 = tensor([[ 0.9323,  0.2964,  0.2072, -0.0611],
              [ 0.0000,  0.5729, -0.8196, -0.1102],
              [ 0.3616, -0.7641, -0.5342,  4.8914],
              [ 0.0000,  0.0000,  0.0000,  1.0000]])

eigenvivek commented Dec 4, 2023

@paucarre I tried your fork but still get high numerical error for some transforms. There is almost certainly a bug in pytorch3d.transforms.matrix_to_axis_angle.

Running this snippet returns conflicting answers from the two packages in roughly one out of five runs:

from pytorch3d.transforms import matrix_to_axis_angle, random_rotation
from pytransform3d.rotations import compact_axis_angle_from_matrix

R = random_rotation()
print(compact_axis_angle_from_matrix(R.numpy()), matrix_to_axis_angle(R).numpy())

For example,

import torch

R = torch.tensor(
    [
        [0.9491, -0.3148, 0.0016],
        [-0.1595, -0.4765, 0.8646],
        [-0.2715, -0.8209, -0.5025],
    ]
)

print(compact_axis_angle_from_matrix(R.numpy()))
# [-2.0759443   0.33628214  0.19135378]

print(matrix_to_axis_angle(R).numpy())
# [ 4.1008744 -0.6643005 -0.3780052]
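Interestingly, the two answers may encode the same rotation: their norms are about 2.1117 and 4.1715, which sum to 2π, and the axes point in opposite directions, so the PyTorch3D output looks like a valid but non-canonical axis-angle (still a problem for any code that assumes θ ≤ π). If SciPy is available, its Rotation class gives a third opinion that always returns the canonical form:

```python
import numpy as np
from scipy.spatial.transform import Rotation

R = np.array([[ 0.9491, -0.3148,  0.0016],
              [-0.1595, -0.4765,  0.8646],
              [-0.2715, -0.8209, -0.5025]])

rotvec = Rotation.from_matrix(R).as_rotvec()  # canonical form: norm(rotvec) <= pi
print(rotvec, np.linalg.norm(rotvec))
```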

paucarre (Author) commented Dec 4, 2023

@eigenvivek There is indeed, just less likely to stumble upon one :)
And that bug might be leaking into other parts of PyTorch3D.

@eigenvivek @bottler @Rlee719
I finally implemented my own SE(3) library on top of PyTorch tensors using implicit dual quaternions. It is MIT-licensed, so you can do pretty much anything you want with it. I'm happy to resolve any doubt/problem/question you might have, and there are a few examples in the unit tests.

eigenvivek commented

Thanks for the template, @paucarre. I also implemented my own drop-in replacement for the log/exp maps: basically just PyTorch ports of the relevant functions in pytransform3d. In case it's useful for anyone else who stumbles on this thread: https://github.com/eigenvivek/pytorchse3

Happy to make a PR if anyone wants it.

bottler (Contributor) commented Jan 9, 2024

This should be much better after 292acc7 .

eigenvivek commented

Hi @bottler, thanks for the ping about the new commit! It does improve numerical stability for the matrices above. For example, the tests below fail with the latest release but pass on the master branch.

import torch
from pytorch3d.transforms import se3_exp_map, se3_log_map

# Test 1
T = torch.Tensor(
    [
        [
            [-0.7384057045, 0.3333132863, -0.5862244964, 0.0000000000],
            [0.3520625532, -0.5508944392, -0.7566816807, 0.0000000000],
            [-0.5751599669, -0.7651259303, 0.2894364297, 0.0000000000],
            [-0.1840534210, -0.1836946011, 0.9952554703, 1.0000000000],
        ]
    ]
)

eq_T = se3_exp_map(se3_log_map(T))
torch.testing.assert_close(T, eq_T)

# Test 2
T = torch.Tensor(
    [
        [
            [-0.7400283217, 0.5210028887, -0.4253400862, 0.0000000000],
            [0.5329059958, 0.0683888718, -0.8434065580, 0.0000000000],
            [-0.4103286564, -0.8508108258, -0.3282552958, 0.0000000000],
            [-0.1197679043, 0.1799146235, 0.5538908839, 1.0000000000],
        ]
    ]
)

eq_T = se3_exp_map(se3_log_map(T))
torch.testing.assert_close(T, eq_T)
