Commit 389df3d (parent c338fd4): docs: Update EfficientLoFTR documentation

1 file changed: +90 −55 lines

docs/source/en/model_doc/efficientloftr.md (90 additions, 55 deletions)
```diff
@@ -10,84 +10,114 @@ specific language governing permissions and limitations under the License.
 ⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
 rendered properly in your Markdown viewer.
 
-
 -->
 
-# EfficientLoFTR
-
-<div class="flex flex-wrap space-x-1">
-    <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white">
+<div style="float: right;">
+    <div class="flex flex-wrap space-x-1">
+        <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white" >
+    </div>
 </div>
 
-## Overview
-
-The EfficientLoFTR model was proposed in [Efficient LoFTR: Semi-Dense Local Feature Matching with Sparse-Like Speed](https://arxiv.org/abs/2403.04765) by Yifan Wang, Xingyi He, Sida Peng, Dongli Tan and Xiaowei Zhou.
-
-This model consists of matching two images together by finding pixel correspondences. It can be used to estimate the pose between them.
-This model is useful for tasks such as image matching, homography estimation, etc.
+# EfficientLoFTR
 
-The abstract from the paper is the following:
+[EfficientLoFTR](https://huggingface.co/papers/2403.04765) is an efficient detector-free local feature matching method that produces semi-dense matches across images with sparse-like speed. It builds upon the original [LoFTR](https://huggingface.co/papers/2104.00680) architecture but introduces significant improvements for both efficiency and accuracy. The key innovation is an aggregated attention mechanism with adaptive token selection that makes the model ~2.5× faster than LoFTR while achieving higher accuracy. EfficientLoFTR can even surpass state-of-the-art efficient sparse matching pipelines like SuperPoint + LightGlue in terms of speed, making it suitable for large-scale or latency-sensitive applications such as image retrieval and 3D reconstruction.
 
-*We present a novel method for efficiently producing semidense matches across images. Previous detector-free matcher
-LoFTR has shown remarkable matching capability in handling large-viewpoint change and texture-poor scenarios but suffers
-from low efficiency. We revisit its design choices and derive multiple improvements for both efficiency and accuracy.
-One key observation is that performing the transformer over the entire feature map is redundant due to shared local
-information, therefore we propose an aggregated attention mechanism with adaptive token selection for efficiency.
-Furthermore, we find spatial variance exists in LoFTR’s fine correlation module, which is adverse to matching accuracy.
-A novel two-stage correlation layer is proposed to achieve accurate subpixel correspondences for accuracy improvement.
-Our efficiency optimized model is ∼ 2.5× faster than LoFTR which can even surpass state-of-the-art efficient sparse
-matching pipeline SuperPoint + LightGlue. Moreover, extensive experiments show that our method can achieve higher
-accuracy compared with competitive semi-dense matchers, with considerable efficiency benefits. This opens up exciting
-prospects for large-scale or latency-sensitive applications such as image retrieval and 3D reconstruction.
-Project page: [https://zju3dv.github.io/efficientloftr/](https://zju3dv.github.io/efficientloftr/).*
+> [!TIP]
+> This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
+>
+> Click on the EfficientLoFTR models in the right sidebar for more examples of how to apply EfficientLoFTR to different computer vision tasks.
 
-## How to use
+The example below demonstrates how to match keypoints between two images with the [`AutoModelForKeypointMatching`] class.
 
-Here is a quick example of using the model.
-```python
-import torch
+<hfoptions id="usage">
+<hfoption id="AutoModel">
 
+```py
 from transformers import AutoImageProcessor, AutoModelForKeypointMatching
-from transformers.image_utils import load_image
-
+import torch
+from PIL import Image
+import requests
 
-image1 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg")
-image2 = load_image("https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg")
+url_image1 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_98169888_3347710852.jpg"
+image1 = Image.open(requests.get(url_image1, stream=True).raw)
+url_image2 = "https://raw.githubusercontent.com/magicleap/SuperGluePretrainedNetwork/refs/heads/master/assets/phototourism_sample_images/united_states_capitol_26757027_6717084061.jpg"
+image2 = Image.open(requests.get(url_image2, stream=True).raw)
 
 images = [image1, image2]
 
-processor = AutoImageProcessor.from_pretrained("stevenbucaille/efficientloftr")
-model = AutoModelForKeypointMatching.from_pretrained("stevenbucaille/efficientloftr")
+processor = AutoImageProcessor.from_pretrained("zju-community/efficientloftr")
+model = AutoModelForKeypointMatching.from_pretrained("zju-community/efficientloftr")
 
 inputs = processor(images, return_tensors="pt")
 with torch.no_grad():
     outputs = model(**inputs)
-```
 
-You can use the `post_process_keypoint_matching` method from the `ImageProcessor` to get the keypoints and matches in a more readable format:
-
-```python
+# Post-process to get keypoints and matches
 image_sizes = [[(image.height, image.width) for image in images]]
-outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
-for i, output in enumerate(outputs):
-    print("For the image pair", i)
-    for keypoint0, keypoint1, matching_score in zip(
-            output["keypoints0"], output["keypoints1"], output["matching_scores"]
-    ):
-        print(
-            f"Keypoint at coordinate {keypoint0.numpy()} in the first image matches with keypoint at coordinate {keypoint1.numpy()} in the second image with a score of {matching_score}."
-        )
+processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
 ```
 
-From the post processed outputs, you can visualize the matches between the two images using the following code:
-```python
-images_with_matching = processor.visualize_keypoint_matching(images, outputs)
-```
+</hfoption>
+</hfoptions>
+
+## Notes
+
+- EfficientLoFTR is designed for efficiency while maintaining high accuracy. It uses an aggregated attention mechanism with adaptive token selection to reduce computational overhead compared to the original LoFTR.
+
+    ```py
+    from transformers import AutoImageProcessor, AutoModelForKeypointMatching
+    import torch
+    from PIL import Image
+    import requests
+
+    processor = AutoImageProcessor.from_pretrained("zju-community/efficientloftr")
+    model = AutoModelForKeypointMatching.from_pretrained("zju-community/efficientloftr")
+
+    # EfficientLoFTR requires pairs of images
+    images = [image1, image2]
+    inputs = processor(images, return_tensors="pt")
+    outputs = model(**inputs)
+
+    # Extract matching information
+    keypoints = outputs.keypoints              # Keypoints in both images
+    matches = outputs.matches                  # Matching indices
+    matching_scores = outputs.matching_scores  # Confidence scores
+    ```
+
+- The model produces semi-dense matches, offering a good balance between the density of matches and computational efficiency. It excels in handling large viewpoint changes and texture-poor scenarios.
+
+- For better visualization and analysis, use the [`EfficientLoFTRImageProcessor.post_process_keypoint_matching`] method to get matches in a more readable format.
+
+    ```py
+    # Process outputs for visualization
+    image_sizes = [[(image.height, image.width) for image in images]]
+    processed_outputs = processor.post_process_keypoint_matching(outputs, image_sizes, threshold=0.2)
+
+    for i, output in enumerate(processed_outputs):
+        print(f"For the image pair {i}")
+        for keypoint0, keypoint1, matching_score in zip(
+            output["keypoints0"], output["keypoints1"], output["matching_scores"]
+        ):
+            print(f"Keypoint at {keypoint0.numpy()} matches with keypoint at {keypoint1.numpy()} with score {matching_score}")
+    ```
+
+- Visualize the matches between the images using the built-in plotting functionality.
+
+    ```py
+    # Easy visualization using the built-in plotting method
+    visualized_images = processor.visualize_keypoint_matching(images, processed_outputs)
+    ```
+
+- EfficientLoFTR uses a novel two-stage correlation layer that achieves accurate subpixel correspondences, improving upon the original LoFTR's fine correlation module.
+
+<div class="flex justify-center">
+    <img src="https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/2nJZQlFToCYp_iLurvcZ4.png">
+</div>
 
-![image/png](https://cdn-uploads.huggingface.co/production/uploads/632885ba1558dac67c440aa8/2nJZQlFToCYp_iLurvcZ4.png)
+## Resources
 
-This model was contributed by [stevenbucaille](https://huggingface.co/stevenbucaille).
-The original code can be found [here](https://github.com/zju3dv/EfficientLoFTR).
+- Refer to the [original EfficientLoFTR repository](https://github.com/zju3dv/EfficientLoFTR) for more examples and implementation details.
+- [EfficientLoFTR project page](https://zju3dv.github.io/efficientloftr/) with interactive demos and additional information.
 
 ## EfficientLoFTRConfig
 
```
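The updated Notes section credits a two-stage correlation layer for subpixel-accurate correspondences. As background for reviewers, here is a minimal NumPy sketch of the soft-argmax refinement idea that such fine-matching stages typically build on; this is an illustrative toy, not the model's actual implementation, and `soft_argmax_2d` is a hypothetical helper name.

```python
import numpy as np

def soft_argmax_2d(corr, temperature=0.1):
    """Refine a coarse match to subpixel precision by taking the
    probability-weighted centroid of a local correlation patch."""
    h, w = corr.shape
    prob = np.exp(corr / temperature)
    prob /= prob.sum()
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Expected (x, y) coordinate under the softmax distribution
    return float((prob * xs).sum()), float((prob * ys).sum())

# A correlation patch peaked equally between integer positions (1, 1) and (1, 2):
# the refined x lands halfway between them, i.e. at a subpixel location.
corr = np.zeros((3, 3))
corr[1, 1] = 5.0
corr[1, 2] = 5.0
x, y = soft_argmax_2d(corr)
```

Lowering `temperature` sharpens the distribution toward the hard argmax; raising it smooths the estimate over neighboring cells.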

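The original overview noted that matched keypoints are useful for homography estimation. A self-contained sketch of that downstream step using the direct linear transform, with synthetic correspondences standing in for EfficientLoFTR matches (in practice `cv2.findHomography` with RANSAC is the usual tool; `fit_homography` is a hypothetical helper):

```python
import numpy as np

def fit_homography(pts0, pts1):
    """Direct linear transform: solve for H such that pts1 ~ H @ pts0."""
    A = []
    for (x, y), (u, v) in zip(pts0, pts1):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # Null-space vector of A (last right-singular vector) holds the 9 entries of H
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# Synthetic "matches": four corners mapped through a known homography
H_true = np.array([[1.2, 0.1, 5.0], [0.0, 0.9, -3.0], [1e-4, 0.0, 1.0]])
pts0 = np.array([[0.0, 0.0], [100.0, 0.0], [100.0, 100.0], [0.0, 100.0]])
proj = np.hstack([pts0, np.ones((4, 1))]) @ H_true.T
pts1 = proj[:, :2] / proj[:, 2:3]  # dehomogenize
H_est = fit_homography(pts0, pts1)
```

With real matches, the `keypoints0`/`keypoints1` arrays from `post_process_keypoint_matching` would play the role of `pts0`/`pts1`, with a robust estimator to reject outlier matches.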
```diff
@@ -101,6 +131,8 @@ The original code can be found [here](https://github.com/zju3dv/EfficientLoFTR).
     - post_process_keypoint_matching
     - visualize_keypoint_matching
 
+<frameworkcontent>
+<pt>
 ## EfficientLoFTRModel
 
 [[autodoc]] EfficientLoFTRModel
```
```diff
@@ -111,4 +143,7 @@ The original code can be found [here](https://github.com/zju3dv/EfficientLoFTR).
 
 [[autodoc]] EfficientLoFTRForKeypointMatching
 
-    - forward
+    - forward
+
+</pt>
+</frameworkcontent>
```
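The rewritten overview attributes the ~2.5× speedup to an aggregated attention mechanism with adaptive token selection. A toy NumPy illustration of why aggregating local tokens before attention cuts cost (a deliberate simplification under assumed shapes, not the model's code): pooling a 16×16 token grid by 2 reduces 256 tokens to 64, shrinking the attention score matrix roughly 16-fold.

```python
import numpy as np

def attention(q, k, v):
    """Plain scaled dot-product attention over (tokens, channels) arrays."""
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def aggregate_tokens(feat, hw, stride=2):
    """Max-pool an (H*W, C) token grid by `stride`, cutting tokens ~stride^2-fold."""
    h, w = hw
    c = feat.shape[-1]
    grid = feat.reshape(h, w, c)
    pooled = grid.reshape(h // stride, stride, w // stride, stride, c).max(axis=(1, 3))
    return pooled.reshape(-1, c)

rng = np.random.default_rng(0)
feat = rng.standard_normal((16 * 16, 32))  # 256 coarse tokens, 32 channels
agg = aggregate_tokens(feat, (16, 16))     # 64 tokens: 64x64 scores vs 256x256
out = attention(agg, agg, agg)
```

The real model selects salient tokens adaptively rather than pooling uniformly, but the cost argument is the same: attention is quadratic in token count.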
