diff --git a/README.md b/README.md
index f62f8a0..49ba3d9 100644
--- a/README.md
+++ b/README.md
@@ -11,18 +11,36 @@

+We propose pix2pix-zero, a diffusion-based image-to-image approach that allows users to specify the edit direction on-the-fly (e.g., cat to dog). Our method can directly use a pre-trained [Stable Diffusion](https://github.com/CompVis/stable-diffusion) model for editing real and synthetic images while preserving the input image's structure. Our method is training-free and prompt-free, as it requires neither manual text prompting for each input image nor costly fine-tuning for each task.
+
+## Results
+All our results are based on the [stable-diffusion-v1-4](https://github.com/CompVis/stable-diffusion) model. Please see the website for more results.
+
+## Real Image Editing
+[image: assets/results_real.jpeg]
+
+## Synthetic Image Editing
+[image: assets/results_syn.jpg]
+
## Method Details
diff --git a/assets/results_real.jpeg b/assets/results_real.jpeg
new file mode 100644
index 0000000..341afb6
Binary files /dev/null and b/assets/results_real.jpeg differ
diff --git a/assets/results_syn.jpg b/assets/results_syn.jpg
new file mode 100644
index 0000000..979c2e0
Binary files /dev/null and b/assets/results_syn.jpg differ
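
The added README text describes specifying an edit direction on-the-fly (e.g., cat to dog) on top of a pre-trained Stable Diffusion model. As a rough illustration only, and not the repository's actual implementation, the sketch below shows one way such a direction could be represented: as the difference between mean CLIP text embeddings of sentences describing the source and target concepts. The checkpoint, sentence lists, and pooling choice are assumptions made for this example.

```python
# Illustrative sketch only: one way to build a "cat -> dog" edit direction from
# CLIP text embeddings. Sentence lists, pooling, and checkpoint are assumptions,
# not the pix2pix-zero implementation.
import torch
from transformers import CLIPTextModel, CLIPTokenizer

# Stable Diffusion v1-4 uses this CLIP text encoder.
model_id = "openai/clip-vit-large-patch14"
tokenizer = CLIPTokenizer.from_pretrained(model_id)
text_encoder = CLIPTextModel.from_pretrained(model_id)

@torch.no_grad()
def mean_text_embedding(sentences):
    """Average the pooled CLIP text embeddings of a list of sentences."""
    inputs = tokenizer(sentences, padding=True, return_tensors="pt")
    return text_encoder(**inputs).pooler_output.mean(dim=0)

source_sentences = ["a photo of a cat", "a cat sitting on a couch", "a painting of a cat"]
target_sentences = ["a photo of a dog", "a dog sitting on a couch", "a painting of a dog"]

# The direction points from the source concept toward the target concept.
edit_direction = mean_text_embedding(target_sentences) - mean_text_embedding(source_sentences)
print(edit_direction.shape)  # torch.Size([768]) for this text encoder
```

In practice, a larger bank of automatically generated sentences per concept would likely give a more robust direction than the three handwritten prompts used above.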