# EfficientLoFTR

The EfficientLoFTR model was proposed in [Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed](https://arxiv.org/abs/2403.04765) by Yifan Wang, Xingyi He, Sida Peng, Dongli Tan and Xiaowei Zhou.

[EfficientLoFTR](https://huggingface.co/papers/2403.04765) is an efficient detector-free local feature matching method that produces semi-dense matches across images with sparse-like speed. It builds upon the original [LoFTR](https://huggingface.co/papers/2104.00680) architecture but introduces significant improvements for both efficiency and accuracy. The key innovation is an aggregated attention mechanism with adaptive token selection that makes the model ~2.5× faster than LoFTR while achieving higher accuracy. EfficientLoFTR can even surpass state-of-the-art efficient sparse matching pipelines like SuperPoint + LightGlue in terms of speed, making it suitable for large-scale or latency-sensitive applications such as image retrieval and 3D reconstruction.

The model matches two images by finding pixel correspondences between them; these correspondences can then be used to estimate the relative pose between the two views. It is useful for tasks such as image matching and homography estimation.

The abstract from the paper is the following:
*We present a novel method for efficiently producing semi-dense matches across images. Previous detector-free matcher LoFTR has shown remarkable matching capability in handling large-viewpoint change and texture-poor scenarios but suffers from low efficiency. We revisit its design choices and derive multiple improvements for both efficiency and accuracy. One key observation is that performing the transformer over the entire feature map is redundant due to shared local information, therefore we propose an aggregated attention mechanism with adaptive token selection for efficiency. Furthermore, we find spatial variance exists in LoFTR's fine correlation module, which is adverse to matching accuracy. A novel two-stage correlation layer is proposed to achieve accurate subpixel correspondences for accuracy improvement. Our efficiency optimized model is ∼2.5× faster than LoFTR which can even surpass state-of-the-art efficient sparse matching pipeline SuperPoint + LightGlue. Moreover, extensive experiments show that our method can achieve higher accuracy compared with competitive semi-dense matchers, with considerable efficiency benefits. This opens up exciting prospects for large-scale or latency-sensitive applications such as image retrieval and 3D reconstruction.*
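The aggregated attention idea from the abstract can be illustrated in isolation: rather than attending over every position of the feature map, features are first pooled into a smaller set of tokens, attention runs on that reduced set, and the result is broadcast back to full resolution. The following is a minimal NumPy sketch of this concept; the shapes, pooling choice, and function name are illustrative and do not reproduce EfficientLoFTR's actual layers:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def aggregated_attention(feat, pool=2):
    """Self-attention over a feature map after local aggregation.

    feat: (H, W, C) feature map. Mean-pooling reduces the token count
    by pool**2, so attention costs (HW/pool**2)**2 instead of (HW)**2.
    Conceptual sketch only, not EfficientLoFTR's implementation.
    """
    H, W, C = feat.shape
    # Aggregate neighbouring features into coarse tokens (mean pooling).
    coarse = feat.reshape(H // pool, pool, W // pool, pool, C).mean(axis=(1, 3))
    tokens = coarse.reshape(-1, C)  # (N, C) with N = HW / pool**2
    # Plain dot-product self-attention over the reduced token set.
    attn = softmax(tokens @ tokens.T / np.sqrt(C))
    out = attn @ tokens
    # Broadcast the attended coarse tokens back to full resolution.
    out = out.reshape(H // pool, W // pool, C)
    return np.repeat(np.repeat(out, pool, axis=0), pool, axis=1)

feat = np.random.rand(8, 8, 16)
out = aggregated_attention(feat)
print(out.shape)  # (8, 8, 16)
```

With `pool=2`, attention here runs over 16 tokens instead of 64 positions, a 16× reduction in the cost of the attention matrix.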
- EfficientLoFTR is designed for efficiency while maintaining high accuracy. It uses an aggregated attention mechanism with adaptive token selection to reduce computational overhead compared to the original LoFTR.

- The model produces semi-dense matches, offering a good balance between the density of matches and computational efficiency. It excels in handling large viewpoint changes and texture-poor scenarios.

- EfficientLoFTR uses a novel two-stage correlation layer that achieves accurate subpixel correspondences, improving upon the original LoFTR's fine correlation module.

The example below matches keypoints between two images. The image paths are placeholders, and the checkpoint name may differ; check the Hub for available EfficientLoFTR checkpoints.

```py
from transformers import AutoImageProcessor, AutoModelForKeypointMatching
import torch
from PIL import Image

# Load two images of the same scene (paths are illustrative).
image1 = Image.open("image1.jpg")
image2 = Image.open("image2.jpg")
images = [image1, image2]

# Checkpoint name may differ; see the Hub for EfficientLoFTR checkpoints.
processor = AutoImageProcessor.from_pretrained("zju-community/efficientloftr")
model = AutoModelForKeypointMatching.from_pretrained("zju-community/efficientloftr")

inputs = processor(images, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
```

- For better visualization and analysis, use the [`EfficientLoFTRImageProcessor.post_process_keypoint_matching`] method to get matches in a more readable format.

```py
# Process outputs for visualization
image_sizes = [[(image.height, image.width) for image in images]]
processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
for i, output in enumerate(processed_outputs):
    print("For the image pair", i)
    for keypoint0, keypoint1, matching_score in zip(
        output["keypoints0"], output["keypoints1"], output["matching_scores"]
    ):
        print(
            f"Keypoint at coordinate {keypoint0.numpy()} in the first image matches with keypoint at coordinate {keypoint1.numpy()} in the second image with a score of {matching_score}."
        )
```
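The subpixel refinement behind a two-stage correlation layer can be illustrated with a soft-argmax: instead of taking the integer argmax of a local correlation patch, the refined location is the probability-weighted expectation of the coordinates, which can land between pixels. This is a minimal NumPy sketch of that idea, not EfficientLoFTR's actual layer:

```python
import numpy as np

def soft_argmax_2d(corr, temperature=1.0):
    """Subpixel peak of a correlation patch via soft-argmax.

    corr: (h, w) correlation scores. Returns (row, col) as floats: the
    softmax-weighted expectation of the coordinates. Conceptual sketch
    of subpixel refinement, not the model's two-stage correlation layer.
    """
    h, w = corr.shape
    p = np.exp((corr - corr.max()) / temperature)
    p /= p.sum()
    rows, cols = np.mgrid[0:h, 0:w]
    return float((p * rows).sum()), float((p * cols).sum())

# A peak shared by pixels (1, 1) and (1, 2) yields a fractional column.
corr = np.zeros((3, 4))
corr[1, 1] = corr[1, 2] = 5.0
row, col = soft_argmax_2d(corr)
print(round(row, 2), round(col, 2))  # 1.0 1.5
```

Lowering `temperature` sharpens the distribution toward the hard argmax; raising it smooths the estimate over the whole patch.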
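The matched coordinates returned by the model can be fed to a standard solver to estimate the geometric relation between the two views, such as a homography. As a self-contained illustration, the sketch below estimates a homography from point correspondences with the direct linear transform (DLT); this is a hypothetical helper, not part of the Transformers API, and production code would add RANSAC for robustness to outlier matches:

```python
import numpy as np

def estimate_homography(pts0, pts1):
    """Estimate a 3x3 homography H with pts1 ~ H @ pts0 via the DLT.

    pts0, pts1: (N, 2) arrays of matched pixel coordinates, N >= 4.
    Illustrative helper only; real pipelines should reject outliers
    (e.g. with RANSAC) before or during the fit.
    """
    A = []
    for (x, y), (u, v) in zip(pts0, pts1):
        # Each correspondence contributes two rows of the linear system A h = 0.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the null vector of A: the last right singular vector.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic check: points related by a known homography are recovered.
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 0.0, 1.0]])
pts0 = np.array([[10.0, 10.0], [200.0, 30.0], [50.0, 180.0], [220.0, 210.0], [120.0, 90.0]])
proj = np.hstack([pts0, np.ones((len(pts0), 1))]) @ H_true.T
pts1 = proj[:, :2] / proj[:, 2:]
H_est = estimate_homography(pts0, pts1)
print(np.allclose(H_est, H_true, atol=1e-6))  # True
```

In practice the `pts0`/`pts1` arrays would come from the `keypoints0`/`keypoints1` fields of the post-processed matching outputs shown above.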