Commit 72e4b2d

miniMaddy authored and stevhliu committed
made suggested changes.
1 parent 5222cd9 commit 72e4b2d

docs/source/en/model_doc/zoedepth.md

Lines changed: 21 additions & 15 deletions
</div>
</div>

# ZoeDepth

[ZoeDepth](https://huggingface.co/papers/2302.12288) is a depth estimation model that combines the generalization performance of relative depth estimation (how far apart objects are from each other) with the accuracy of metric depth estimation (absolute depth in real-world meters) from a single image. It is pre-trained on 12 datasets using relative depth and fine-tuned on 2 datasets (NYU Depth v2 and KITTI) for metric accuracy. A lightweight head with a metric bins module is used for each domain, and during inference, each input image is automatically routed to the appropriate head by a latent classifier.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/zoedepth_architecture_bis.png"
alt="drawing" width="600"/>
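
The `Intel/zoedepth-nyu-kitti` checkpoint bundles two metric heads, one per fine-tuning domain. As a quick, hedged way to see the per-domain bin settings, you can inspect the model config; the `bin_configurations` field used here is an assumption about how the checkpoint exposes them, so double-check the checkpoint's `config.json` on the Hub.

```py
from transformers import ZoeDepthConfig

# Inspect the per-domain metric bin settings of the two-head checkpoint.
# `bin_configurations` is assumed to hold one entry per domain (NYU and KITTI);
# verify the field name against the checkpoint's config.json.
config = ZoeDepthConfig.from_pretrained("Intel/zoedepth-nyu-kitti")
print(config.bin_configurations)
```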

You can find all the original ZoeDepth checkpoints under the [Intel](https://huggingface.co/Intel?search=zoedepth) organization.

The example below demonstrates how to estimate depth with [`Pipeline`] or the [`AutoModel`] class.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import requests
import torch
from transformers import pipeline
from PIL import Image

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
pipeline = pipeline(
    task="depth-estimation",
    model="Intel/zoedepth-nyu-kitti",
    torch_dtype=torch.float16,
    device=0
)
results = pipeline(image)
results["depth"]
```
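
The pipeline returns a dict whose `"depth"` entry is a PIL image (alongside the raw `"predicted_depth"` tensor), so a quick follow-up, sketched here with an arbitrary output filename, is to write it to disk:

```py
# `results["depth"]` is a PIL image; save it for a quick visual check.
results["depth"].save("zoedepth_cat_depth.png")
```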

</hfoption>
<hfoption id="AutoModel">

```py
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

image_processor = AutoImageProcessor.from_pretrained(
    "Intel/zoedepth-nyu-kitti"
)
model = AutoModelForDepthEstimation.from_pretrained(
    "Intel/zoedepth-nyu-kitti",
    device_map="auto"
)
url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/pipeline-cat-chonk.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = image_processor(image, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model(**inputs)

# interpolate to original size and visualize the prediction
## ZoeDepth dynamically pads the input image, so pass the original image size as argument
## to `post_process_depth_estimation` to remove the padding and resize to original dimensions.
post_processed_output = image_processor.post_process_depth_estimation(
    outputs,
    source_sizes=[(image.height, image.width)],
)

predicted_depth = post_processed_output[0]["predicted_depth"]
depth = (predicted_depth - predicted_depth.min()) / (predicted_depth.max() - predicted_depth.min())
depth = depth.detach().cpu().numpy() * 255
Image.fromarray(depth.astype("uint8"))
```

</hfoption>
</hfoptions>
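
Because the NYU/KITTI heads predict metric depth, the post-processed `predicted_depth` from the [`AutoModel`] example above can be read as distances in real-world meters. A minimal sketch, assuming the post-processing step resized the prediction back to the original image resolution:

```py
# Read the estimated distance (in meters) at the center pixel of the image,
# reusing `post_processed_output` and `image` from the AutoModel example above.
depth_m = post_processed_output[0]["predicted_depth"]
center = depth_m[image.height // 2, image.width // 2].item()
print(f"Estimated depth at the image center: {center:.2f} m")
```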

## Notes

- In the [original implementation](https://github.com/isl-org/ZoeDepth/blob/edb6daf45458569e24f50250ef1ed08c015f17a7/zoedepth/models/depth_model.py#L131) ZoeDepth performs inference on both the original and flipped images and averages the results. The `post_process_depth_estimation` function handles this by passing the flipped outputs to the optional `outputs_flipped` argument as shown below (a short sketch after the snippet compares the averaged and single-pass predictions).

  ```py
  # reuse `inputs`, `model`, `image_processor`, and `image` from the AutoModel example
  pixel_values = inputs.pixel_values
  with torch.no_grad():
      outputs = model(pixel_values)
      outputs_flipped = model(torch.flip(pixel_values, dims=[3]))
  post_processed_output = image_processor.post_process_depth_estimation(
      outputs,
      source_sizes=[(image.height, image.width)],
      outputs_flipped=outputs_flipped,
  )
  ```
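
To gauge what the flip averaging changes, a small sketch like the one below (reusing `image_processor` and `image` from the [`AutoModel`] example and the `outputs`/`post_processed_output` names from the snippet above; the `source_sizes` argument is assumed) compares the averaged prediction against a single forward pass:

```py
# Post-process the same outputs without the flipped pass for comparison.
single_pass = image_processor.post_process_depth_estimation(
    outputs,
    source_sizes=[(image.height, image.width)],
)[0]["predicted_depth"]

averaged = post_processed_output[0]["predicted_depth"]

# Report how much the flip-averaged metric depth differs on average.
print(f"Mean absolute difference: {(averaged - single_pass).abs().mean().item():.4f} m")
```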

## Resources

- Refer to this [notebook](https://github.com/NielsRogge/Transformers-Tutorials/tree/master/ZoeDepth) for an inference example.

## ZoeDepthConfig