
Commit 1082361

Depth Anything: update conversion script for V2 (huggingface#31522)
* Depth Anything: update conversion script for V2
* Update docs
* Style
* Revert "Update docs" (this reverts commit be0ca47)
* Add docs for depth anything v2
* Add depth_anything_v2 to MODEL_NAMES_MAPPING, done similarly to Flan-T5: https://github.com/huggingface/transformers/pull/19892/files
* Add tip in original docs
1 parent a8fa6fb commit 1082361

File tree

5 files changed: +165 −16 lines changed

docs/source/en/_toctree.yml

+2
@@ -581,6 +581,8 @@
      title: DeiT
    - local: model_doc/depth_anything
      title: Depth Anything
+    - local: model_doc/depth_anything_v2
+      title: Depth Anything V2
    - local: model_doc/deta
      title: DETA
    - local: model_doc/detr

docs/source/en/model_doc/depth_anything.md

+6
@@ -20,6 +20,12 @@ rendered properly in your Markdown viewer.

The Depth Anything model was proposed in [Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data](https://arxiv.org/abs/2401.10891) by Lihe Yang, Bingyi Kang, Zilong Huang, Xiaogang Xu, Jiashi Feng, Hengshuang Zhao. Depth Anything is based on the [DPT](dpt) architecture, trained on ~62 million images, obtaining state-of-the-art results for both relative and absolute depth estimation.

+<Tip>
+
+[Depth Anything V2](depth_anything_v2) was released in June 2024. It uses the same architecture as Depth Anything and is therefore compatible with all code examples and existing workflows. However, it leverages synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions.
+
+</Tip>
+
The abstract from the paper is the following:

*This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus is able to reduce the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet.*
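The Tip above claims drop-in compatibility with existing Depth Anything workflows. As an illustrative sketch (an editor's note, not part of the commit), the V1 pipeline example only needs its checkpoint name swapped for the converted V2 checkpoint used elsewhere in this commit:

```python
from transformers import pipeline
from PIL import Image
import requests

# same depth-estimation pipeline as for Depth Anything V1;
# only the checkpoint name changes to the converted V2 weights
pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

depth = pipe(image)["depth"]  # PIL image containing the predicted depth map
```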
docs/source/en/model_doc/depth_anything_v2.md (new file)

+115

@@ -0,0 +1,115 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Depth Anything V2

## Overview

Depth Anything V2 was introduced in [the paper of the same name](https://arxiv.org/abs/2406.09414) by Lihe Yang et al. It uses the same architecture as the original [Depth Anything model](depth_anything), but leverages synthetic data and a larger-capacity teacher model to achieve much finer and more robust depth predictions.

The abstract from the paper is the following:

*This work presents Depth Anything V2. Without pursuing fancy techniques, we aim to reveal crucial findings to pave the way towards building a powerful monocular depth estimation model. Notably, compared with V1, this version produces much finer and more robust depth predictions through three key practices: 1) replacing all labeled real images with synthetic images, 2) scaling up the capacity of our teacher model, and 3) teaching student models via the bridge of large-scale pseudo-labeled real images. Compared with the latest models built on Stable Diffusion, our models are significantly more efficient (more than 10x faster) and more accurate. We offer models of different scales (ranging from 25M to 1.3B params) to support extensive scenarios. Benefiting from their strong generalization capability, we fine-tune them with metric depth labels to obtain our metric depth models. In addition to our models, considering the limited diversity and frequent noise in current test sets, we construct a versatile evaluation benchmark with precise annotations and diverse scenes to facilitate future research.*

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/depth_anything_overview.jpg"
alt="drawing" width="600"/>

<small> Depth Anything overview. Taken from the <a href="https://arxiv.org/abs/2401.10891">original paper</a>.</small>

The Depth Anything models were contributed by [nielsr](https://huggingface.co/nielsr).
The original code can be found [here](https://github.com/DepthAnything/Depth-Anything-V2).

## Usage example

There are two main ways to use Depth Anything V2: either using the pipeline API, which abstracts away all the complexity for you, or by using the `DepthAnythingForDepthEstimation` class yourself.

### Pipeline API

The pipeline allows you to use the model in a few lines of code:
```python
>>> from transformers import pipeline
>>> from PIL import Image
>>> import requests

>>> # load pipe
>>> pipe = pipeline(task="depth-estimation", model="depth-anything/Depth-Anything-V2-Small-hf")

>>> # load image
>>> url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> # inference
>>> depth = pipe(image)["depth"]
```
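As a small aside (an editor's sketch, not part of the committed file): the depth-estimation pipeline returns a dictionary, so the raw model output is available alongside the rendered `"depth"` image. A quick way to inspect it, reusing `pipe` and `image` from the snippet above:

```python
result = pipe(image)
# the rendered depth map is a PIL image; the raw prediction is a torch tensor
print(sorted(result.keys()))  # ['depth', 'predicted_depth']
print(type(result["depth"]))  # <class 'PIL.Image.Image'>
```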
### Using the model yourself

If you want to do the pre- and post-processing yourself, here's how to do that:

```python
>>> from transformers import AutoImageProcessor, AutoModelForDepthEstimation
>>> import torch
>>> import numpy as np
>>> from PIL import Image
>>> import requests

>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg"
>>> image = Image.open(requests.get(url, stream=True).raw)

>>> image_processor = AutoImageProcessor.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")
>>> model = AutoModelForDepthEstimation.from_pretrained("depth-anything/Depth-Anything-V2-Small-hf")

>>> # prepare image for the model
>>> inputs = image_processor(images=image, return_tensors="pt")

>>> with torch.no_grad():
...     outputs = model(**inputs)
...     predicted_depth = outputs.predicted_depth

>>> # interpolate to original size
>>> prediction = torch.nn.functional.interpolate(
...     predicted_depth.unsqueeze(1),
...     size=image.size[::-1],
...     mode="bicubic",
...     align_corners=False,
... )

>>> # visualize the prediction
>>> output = prediction.squeeze().cpu().numpy()
>>> formatted = (output * 255 / np.max(output)).astype("uint8")
>>> depth = Image.fromarray(formatted)
```
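As a follow-up sketch (again an editor's addition, not part of the committed file), the `depth` image produced above can be placed next to the input for a quick visual check, using only PIL:

```python
# assumes `image` and `depth` from the snippet above; both have the same size
side_by_side = Image.new("RGB", (image.width + depth.width, max(image.height, depth.height)))
side_by_side.paste(image, (0, 0))
side_by_side.paste(depth.convert("RGB"), (image.width, 0))
side_by_side.save("depth_side_by_side.png")
```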
## Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Depth Anything.

- [Monocular depth estimation task guide](../tasks/depth_estimation)
- [Depth Anything V2 demo](https://huggingface.co/spaces/depth-anything/Depth-Anything-V2).
- A notebook showcasing inference with [`DepthAnythingForDepthEstimation`] can be found [here](https://github.com/NielsRogge/Transformers-Tutorials/blob/master/Depth%20Anything/Predicting_depth_in_an_image_with_Depth_Anything.ipynb). 🌎
- [Core ML conversion of the `small` variant for use on Apple Silicon](https://huggingface.co/apple/coreml-depth-anything-v2-small).

If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

## DepthAnythingConfig

[[autodoc]] DepthAnythingConfig

## DepthAnythingForDepthEstimation

[[autodoc]] DepthAnythingForDepthEstimation
    - forward

src/transformers/models/auto/configuration_auto.py

+1
@@ -356,6 +356,7 @@
        ("deit", "DeiT"),
        ("deplot", "DePlot"),
        ("depth_anything", "Depth Anything"),
+       ("depth_anything_v2", "Depth Anything V2"),
        ("deta", "DETA"),
        ("detr", "DETR"),
        ("dialogpt", "DialoGPT"),

src/transformers/models/depth_anything/convert_depth_anything_to_hf.py

+41 −16
@@ -33,25 +33,28 @@

def get_dpt_config(model_name):
    if "small" in model_name:
+        out_indices = [3, 6, 9, 12] if "v2" in model_name else [9, 10, 11, 12]
        backbone_config = Dinov2Config.from_pretrained(
-            "facebook/dinov2-small", out_indices=[9, 10, 11, 12], apply_layernorm=True, reshape_hidden_states=False
+            "facebook/dinov2-small", out_indices=out_indices, apply_layernorm=True, reshape_hidden_states=False
        )
        fusion_hidden_size = 64
        neck_hidden_sizes = [48, 96, 192, 384]
    elif "base" in model_name:
+        out_indices = [3, 6, 9, 12] if "v2" in model_name else [9, 10, 11, 12]
        backbone_config = Dinov2Config.from_pretrained(
-            "facebook/dinov2-base", out_indices=[9, 10, 11, 12], apply_layernorm=True, reshape_hidden_states=False
+            "facebook/dinov2-base", out_indices=out_indices, apply_layernorm=True, reshape_hidden_states=False
        )
        fusion_hidden_size = 128
        neck_hidden_sizes = [96, 192, 384, 768]
    elif "large" in model_name:
+        out_indices = [5, 12, 18, 24] if "v2" in model_name else [21, 22, 23, 24]
        backbone_config = Dinov2Config.from_pretrained(
-            "facebook/dinov2-large", out_indices=[21, 22, 23, 24], apply_layernorm=True, reshape_hidden_states=False
+            "facebook/dinov2-large", out_indices=out_indices, apply_layernorm=True, reshape_hidden_states=False
        )
        fusion_hidden_size = 256
        neck_hidden_sizes = [256, 512, 1024, 1024]
    else:
-        raise NotImplementedError("To do")
+        raise NotImplementedError(f"Model not supported: {model_name}")

    config = DepthAnythingConfig(
        reassemble_hidden_size=backbone_config.hidden_size,
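The substantive change in `get_dpt_config` is which DINOv2 layers feed the DPT neck: V2 checkpoints tap evenly spaced intermediate blocks, while V1 reads the last four. A standalone sketch of just that selection logic, mirroring the diff above (the helper name is illustrative and does not exist in the script):

```python
def select_out_indices(model_name: str) -> list[int]:
    # V2 taps evenly spaced DINOv2 blocks; V1 keeps the last four blocks
    if "large" in model_name:
        # the large backbone has 24 transformer blocks
        return [5, 12, 18, 24] if "v2" in model_name else [21, 22, 23, 24]
    # the small and base backbones have 12 transformer blocks
    return [3, 6, 9, 12] if "v2" in model_name else [9, 10, 11, 12]


print(select_out_indices("depth-anything-v2-small"))  # [3, 6, 9, 12]
print(select_out_indices("depth-anything-large"))     # [21, 22, 23, 24]
```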
@@ -169,9 +172,13 @@ def prepare_img():


name_to_checkpoint = {
-    "depth-anything-small": "depth_anything_vits14.pth",
-    "depth-anything-base": "depth_anything_vitb14.pth",
-    "depth-anything-large": "depth_anything_vitl14.pth",
+    "depth-anything-small": "pytorch_model.bin",
+    "depth-anything-base": "pytorch_model.bin",
+    "depth-anything-large": "pytorch_model.bin",
+    "depth-anything-v2-small": "depth_anything_v2_vits.pth",
+    "depth-anything-v2-base": "depth_anything_v2_vitb.pth",
+    "depth-anything-v2-large": "depth_anything_v2_vitl.pth",
+    # v2-giant pending
}

@@ -184,17 +191,23 @@ def convert_dpt_checkpoint(model_name, pytorch_dump_folder_path, push_to_hub, ve
    # define DPT configuration
    config = get_dpt_config(model_name)

-    model_name_to_filename = {
-        "depth-anything-small": "depth_anything_vits14.pth",
-        "depth-anything-base": "depth_anything_vitb14.pth",
-        "depth-anything-large": "depth_anything_vitl14.pth",
+    model_name_to_repo = {
+        "depth-anything-small": "LiheYoung/depth_anything_vits14",
+        "depth-anything-base": "LiheYoung/depth_anything_vitb14",
+        "depth-anything-large": "LiheYoung/depth_anything_vitl14",
+        "depth-anything-v2-small": "depth-anything/Depth-Anything-V2-Small",
+        "depth-anything-v2-base": "depth-anything/Depth-Anything-V2-Base",
+        "depth-anything-v2-large": "depth-anything/Depth-Anything-V2-Large",
    }

    # load original state_dict
-    filename = model_name_to_filename[model_name]
+    repo_id = model_name_to_repo[model_name]
+    filename = name_to_checkpoint[model_name]
    filepath = hf_hub_download(
-        repo_id="LiheYoung/Depth-Anything", filename=f"checkpoints/{filename}", repo_type="space"
+        repo_id=repo_id,
+        filename=f"{filename}",
    )
+
    state_dict = torch.load(filepath, map_location="cpu")
    # rename keys
@@ -247,11 +260,23 @@ def convert_dpt_checkpoint(model_name, pytorch_dump_folder_path, push_to_hub, ve
            expected_slice = torch.tensor(
                [[87.9968, 87.7493, 88.2704], [87.1927, 87.6611, 87.3640], [86.7789, 86.9469, 86.7991]]
            )
+        elif model_name == "depth-anything-v2-small":
+            expected_slice = torch.tensor(
+                [[2.6751, 2.6211, 2.6571], [2.5820, 2.6138, 2.6271], [2.6160, 2.6141, 2.6306]]
+            )
+        elif model_name == "depth-anything-v2-base":
+            expected_slice = torch.tensor(
+                [[4.3576, 4.3723, 4.3908], [4.3231, 4.3146, 4.3611], [4.3016, 4.3170, 4.3121]]
+            )
+        elif model_name == "depth-anything-v2-large":
+            expected_slice = torch.tensor(
+                [[162.2751, 161.8504, 162.8788], [160.3138, 160.8050, 161.9835], [159.3812, 159.9884, 160.0768]]
+            )
        else:
            raise ValueError("Not supported")

        assert predicted_depth.shape == torch.Size(expected_shape)
-        assert torch.allclose(predicted_depth[0, :3, :3], expected_slice, atol=1e-6)
+        assert torch.allclose(predicted_depth[0, :3, :3], expected_slice, atol=1e-4)
        print("Looks ok!")

@@ -262,8 +287,8 @@ def convert_dpt_checkpoint(model_name, pytorch_dump_folder_path, push_to_hub, ve

    if push_to_hub:
        print("Pushing model and processor to hub...")
-        model.push_to_hub(repo_id=f"LiheYoung/{model_name}-hf")
-        processor.push_to_hub(repo_id=f"LiheYoung/{model_name}-hf")
+        model.push_to_hub(repo_id=f"{model_name.title()}-hf")
+        processor.push_to_hub(repo_id=f"{model_name.title()}-hf")


if __name__ == "__main__":
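One note on the new push target (an editor's observation, not part of the diff): `str.title()` capitalizes every hyphen-separated token, so the pushed repository name matches the official checkpoint naming and, with no namespace prefix, is created under the authenticated account:

```python
# str.title() upper-cases the first letter of each hyphen-separated token
print("depth-anything-v2-small".title() + "-hf")  # Depth-Anything-V2-Small-hf
```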
