Commit 37c8248

Update evaluation.mdx (huggingface#2862)
Fix typos
1 parent 1384546

File tree

1 file changed: +4 -4 lines


docs/source/en/conceptual/evaluation.mdx

Lines changed: 4 additions & 4 deletions
@@ -310,7 +310,7 @@ for idx in range(len(dataset)):
     edited_images.append(edited_image)
 ```

-To measure the directional similarity, we first load CLIP's image and text encoders.
+To measure the directional similarity, we first load CLIP's image and text encoders:

 ```python
 from transformers import (
@@ -329,7 +329,7 @@ image_encoder = CLIPVisionModelWithProjection.from_pretrained(clip_id).to(device

 Notice that we are using a particular CLIP checkpoint, i.e., `openai/clip-vit-large-patch14`. This is because the Stable Diffusion pre-training was performed with this CLIP variant. For more details, refer to the [documentation](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix#diffusers.StableDiffusionInstructPix2PixPipeline.text_encoder).

-Next, we prepare a PyTorch `nn.module` to compute directional similarity:
+Next, we prepare a PyTorch `nn.Module` to compute directional similarity:

 ```python
 import torch.nn as nn
@@ -410,7 +410,7 @@ It should be noted that the `StableDiffusionInstructPix2PixPipeline` exposes t

 We can extend the idea of this metric to measure how similar the original image and edited version are. To do that, we can just do `F.cosine_similarity(img_feat_two, img_feat_one)`. For these kinds of edits, we would still want the primary semantics of the images to be preserved as much as possible, i.e., a high similarity score.

-We can use these metrics for similar pipelines such as the[`StableDiffusionPix2PixZeroPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline)`.
+We can use these metrics for similar pipelines such as the [`StableDiffusionPix2PixZeroPipeline`](https://huggingface.co/docs/diffusers/main/en/api/pipelines/stable_diffusion/pix2pix_zero#diffusers.StableDiffusionPix2PixZeroPipeline).

 <Tip>

@@ -550,7 +550,7 @@ FID results tend to be fragile as they depend on a lot of factors:
 * The image format (not the same if we start from PNGs vs JPGs).

 Keeping that in mind, FID is often most useful when comparing similar runs, but it is
-hard to to reproduce paper results unless the authors carefully disclose the FID
+hard to reproduce paper results unless the authors carefully disclose the FID
 measurement code.

 These points apply to other related metrics too, such as KID and IS.
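The directional-similarity metric that this diff's text refers to compares the direction of change in CLIP image space against the direction of change in CLIP text space. A minimal sketch of that computation, using small hand-made tensors as stand-ins for real CLIP embeddings (the `directional_similarity` helper name here is illustrative, not taken from the diffusers codebase):

```python
import torch
import torch.nn.functional as F

def directional_similarity(img_feat_one, img_feat_two, txt_feat_one, txt_feat_two):
    """Cosine similarity between the image-space and text-space edit directions."""
    edit_direction_img = img_feat_two - img_feat_one
    edit_direction_txt = txt_feat_two - txt_feat_one
    return F.cosine_similarity(edit_direction_img, edit_direction_txt)

# Toy 4-d features standing in for CLIP embeddings of (image, caption) pairs.
img_feat_one = torch.tensor([[1.0, 0.0, 0.0, 0.0]])  # original image
img_feat_two = torch.tensor([[1.0, 1.0, 0.0, 0.0]])  # edited image
txt_feat_one = torch.tensor([[0.0, 0.0, 1.0, 0.0]])  # original caption
txt_feat_two = torch.tensor([[0.0, 1.0, 1.0, 0.0]])  # edit caption

# Both edit directions are [0, 1, 0, 0], so the similarity is 1.0.
print(directional_similarity(img_feat_one, img_feat_two, txt_feat_one, txt_feat_two))
```

The extension mentioned in the diff, `F.cosine_similarity(img_feat_two, img_feat_one)`, compares the two image embeddings directly instead of the edit directions, measuring how much of the original image's semantics survive the edit.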
