
Test results on 3DMatch dataset are different from the paper #31

Open
SheldonFung98 opened this issue Feb 18, 2023 · 2 comments
@SheldonFung98

Hi! Amazing work!
I downloaded the pre-trained model and evaluated it on the 3DMatch dataset:
python3 main.py configs/test/3dmatch.yaml

Note that I have already changed the following line in the configuration file (configs/test/3dmatch.yaml) as suggested in issue #8:
max_condition_num: 30

The results are as follows with the 3DMatch benchmark:
average registration recall: 0.8299445471349353 tensor(0.9797, device='cuda:0') tensor(0.6372, device='cuda:0')

Could you please tell me what causes the discrepancy?
Thanks in advance!

@rabbityl (Owner)
Maybe try 'max_condition_num: 10'. The non-determinism of RANSAC could also lead to slightly different results.
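For reference, a RANSAC-based evaluation is non-deterministic because each run draws different random minimal samples of correspondences. The sketch below is not the repo's actual code (the function names, threshold, and iteration count are illustrative); it shows a toy rigid-transform RANSAC where all run-to-run variability comes from the RNG, so pinning the seed makes results repeatable:

```python
import numpy as np

def fit_rigid(src, tgt):
    """Least-squares rigid fit (Kabsch/SVD) on a small point set."""
    sc, tc = src - src.mean(0), tgt - tgt.mean(0)
    U, _, Vt = np.linalg.svd(sc.T @ tc)
    # Guard against a reflection (det = -1) in the SVD solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, tgt.mean(0) - R @ src.mean(0)

def ransac_rigid(src, tgt, rng, iters=200, thresh=0.05):
    """Toy RANSAC for a rigid transform from putative 3D correspondences.
    Different rng states sample different minimal sets, so the returned
    model (and any recall computed from it) can vary between runs."""
    best_inliers, best_Rt = -1, None
    n = len(src)
    for _ in range(iters):
        idx = rng.choice(n, size=3, replace=False)   # minimal sample
        R, t = fit_rigid(src[idx], tgt[idx])
        resid = np.linalg.norm(src @ R.T + t - tgt, axis=1)
        inliers = int((resid < thresh).sum())
        if inliers > best_inliers:
            best_inliers, best_Rt = inliers, (R, t)
    return best_Rt, best_inliers
```

Passing a fixed `np.random.default_rng(seed)` reproduces the same estimate every time; letting the seed float is what produces the small differences between evaluation runs.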

@SheldonFung98 (Author)


Thanks for your reply!
I edited the config file accordingly and the test results are as follows:
average registration recall: 0.829328404189772 tensor(0.9784, device='cuda:0') tensor(0.6173, device='cuda:0')

I also wonder why you don't use Procrustes to make predictions directly. Is it because it yields worse results?
