Clean Diffusion is a Latent Diffusion Model trained only on public domain (CC0) images.
You can download it from Hugging Face. If you are Japanese, I recommend Clean Diffusion For Japanese (TBA) instead of Clean Diffusion (For Global). That model is more powerful than this global version.
With great power comes great responsibility.
If you CANNOT UNDERSTAND THESE WORDS, I recommend that you NOT USE ANY diffusion model that has such great power.
You will soon be able to use Clean Diffusion with the following code:
from diffusers import StableDiffusionPipeline
import torch

# Load the fp16 weights for faster, lighter GPU inference.
model_id = "alfredplpl/clean-diffusion"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16, revision="fp16")
pipe = pipe.to("cuda")

# Generate an image from a text prompt and save it.
prompt = "A girl by Mucha."
image = pipe(prompt).images[0]
image.save("girl.png")
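If you do not have a CUDA GPU, a full-precision CPU variant of the same pipeline should also work (a sketch; it is much slower):

# Sketch: run on CPU with the default float32 weights instead of fp16.
pipe = StableDiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cpu")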
Clean Diffusion is less powerful than Stable Diffusion. Therefore, I recommend fine-tuning Clean Diffusion in the same way as Stable Diffusion, because the two models share the same network architecture. And I repeat these words before I explain the details.
With great power comes great responsibility.
Please consider these words before you fine-tune Clean Diffusion.
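As a starting point, here is a minimal fine-tuning sketch in the style of the diffusers text-to-image training example. It assumes you supply your own `dataloader` of (image tensor, caption) batches; it is a sketch, not the exact recipe used to train Clean Diffusion.

import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "alfredplpl/clean-diffusion"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the U-Net is tuned; the VAE and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

for images, captions in dataloader:  # ASSUMPTION: your own (image, caption) batches, images scaled to [-1, 1]
    # Encode the images into the latent space and add noise at a random timestep.
    latents = vae.encode(images).latent_dist.sample() * 0.18215
    noise = torch.randn_like(latents)
    timesteps = torch.randint(0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],))
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition the U-Net on the caption embeddings.
    tokens = tokenizer(list(captions), padding="max_length",
                       max_length=tokenizer.model_max_length,
                       truncation=True, return_tensors="pt")
    encoder_hidden_states = text_encoder(tokens.input_ids)[0]

    # The training objective: predict the noise that was added.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    loss = F.mse_loss(noise_pred, noise)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()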
TBA on Colab.
TBA on Colab.
TBA on Colab.
TBA
I prove that Clean Diffusion is clean with the following explanation.
TBA
Clean Diffusion is legal and ethical.
Clean Diffusion is MADE IN JAPAN. Therefore, it is subject to Japanese copyright law.
TBA
TBA
TBA
- ArtBench (only the images whose public domain flag is True; see the sketch after this list)
- Popeye the Sailor Meets Sindbad the Sailor
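As a hypothetical sketch of that public-domain filtering: the file name and the `public_domain` column below are my assumptions, not the actual ArtBench schema, so check the real metadata before relying on this.

import pandas as pd

# Hypothetical sketch: keep only the ArtBench rows flagged as public domain.
# "artbench_metadata.csv" and the "public_domain" column are assumed names;
# consult the actual ArtBench release for the real file and column names.
meta = pd.read_csv("artbench_metadata.csv")
public_domain_only = meta[meta["public_domain"]]
public_domain_only.to_csv("artbench_public_domain.csv", index=False)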
I would like to release all of the raw training images because they are public domain. However, the images are huge (70 GB+). Therefore, I have released a tiny version like this:
TBA
TBA
Please read the ldm_files folder. I referred to stable-diffusion while creating these codes and configs.
TBA
Standing on the shoulders of giants
@misc{rombach2021highresolution,
  title={High-Resolution Image Synthesis with Latent Diffusion Models},
  author={Robin Rombach and Andreas Blattmann and Dominik Lorenz and Patrick Esser and Björn Ommer},
  year={2021},
  eprint={2112.10752},
  archivePrefix={arXiv},
  primaryClass={cs.CV}
}

@article{liao2022artbench,
  title={The ArtBench Dataset: Benchmarking Generative Models with Artworks},
  author={Liao, Peiyuan and Li, Xiuyu and Liu, Xihui and Keutzer, Kurt},
  journal={arXiv preprint arXiv:2206.11404},
  year={2022}
}