Majesty Diffusion is a set of implementations of text-to-image diffusion models with a royal touch 👸
Access our Majestic Guide (under construction), join our community on Discord, or reach out via @multimodalart on Twitter. Also share your settings with us!
Current implementations:
A Dango233 and apolinario (@multimodalart) Colab notebook implementing CompVis' Latent Diffusion. Contribute to our settings library on Hugging Face!
v1.2
- Added Dango233 CLIP Guidance
- Added Dango233 magical new step and upscaling scheduling
- Added Dango233 cuts, augs and attributes scheduling
- Added Dango233 mag and clamp settings
- Added Dango233 linear ETA scheduling (see the sketch after this list)
- Added Dango233 negative prompts for Latent Diffusion Guidance
- Added Jack000 GLID-3 XL watermark free fine-tuned model
- Added dmarx Multi-Modal-Comparators for CLIP and CLIP-like models
- Added open_clip gradient checkpointing
- Added crowsonkb aesthetic models
- Added LAION-AI aesthetic predictor embeddings
- Added Dango233 inpainting mode
- Added apolinario (@multimodalart) savable settings and settings library (including the `colab-free-default`, `dango233-princesses`, `the-other-zippy` and `makaitrad` shared settings). Share yours with us too with a pull request!
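For context on the linear ETA scheduling mentioned above: DDIM-style samplers expose an eta parameter that controls how much noise is re-injected at each step, and a linear schedule simply interpolates that value across the run instead of keeping it fixed. Below is a minimal sketch of the idea, assuming a hypothetical `linear_eta_schedule` helper (not the notebook's actual API):

```python
# Minimal sketch of a linear ETA schedule for a DDIM-style sampler.
# `linear_eta_schedule`, `start_eta` and `end_eta` are hypothetical names,
# not the Majesty Diffusion notebook's actual API.
import numpy as np

def linear_eta_schedule(num_steps: int, start_eta: float = 0.0, end_eta: float = 1.0):
    """Return one eta value per sampling step, linearly interpolated."""
    return np.linspace(start_eta, end_eta, num_steps)

# Example: 50 steps that start deterministic (eta = 0) and end stochastic (eta = 1).
etas = linear_eta_schedule(50)
print(etas[:3])  # [0.         0.02040816 0.04081633]
```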
v1.3
- Better Upscaler (learn how to use it on our [Majestic Guide](https://multimodal.art/majesty-diffusion))
v1.4
- Added Dango233 Customised Dynamic Thresholding (a sketch of the general idea follows this list)
- Added open_clip ViT-L/14 LAION-400M trained
- Fixed the CLOOB perceptor from MMC
- Removed the latent upscaler (it was broken) and added an RGB upscaler
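The Customised Dynamic Thresholding above is a variant of the dynamic thresholding idea popularised by Imagen: instead of statically clamping the predicted image to [-1, 1], clamp it to a per-sample percentile of its absolute values and rescale. The sketch below shows only the generic technique under assumed names (`dynamic_threshold`, `percentile`); the notebook's customised version differs in its details:

```python
# Illustrative sketch of percentile-based dynamic thresholding.
# `dynamic_threshold` and `percentile` are hypothetical names; this is not
# the notebook's actual "Customised Dynamic Thresholding" code.
import torch

def dynamic_threshold(x0: torch.Tensor, percentile: float = 0.995) -> torch.Tensor:
    """Clamp the predicted x0 to a per-sample percentile of |x0| and rescale to [-1, 1]."""
    flat = x0.flatten(start_dim=1).abs()
    s = torch.quantile(flat, percentile, dim=1)            # per-sample threshold
    s = s.clamp(min=1.0).view(-1, *([1] * (x0.dim() - 1)))
    return x0.clamp(-s, s) / s                             # keep values in [-1, 1]

x0 = torch.randn(2, 4, 32, 32) * 3.0       # exaggerated out-of-range prediction
print(dynamic_threshold(x0).abs().max())   # <= 1.0
```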
v1.5
- Even better defaults
- Better dynamic thresholding
- Improves range scale
- Adds var scale and mean scale
- Adds the possibility of blurring cuts (see the sketch after this list)
- Adds experimental compression and punishment settings
- Adds PLMS support (experimental; results are perceptually weird)
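Regarding the blurred cuts option above: CLIP-guided diffusion scores many random crops ("cuts") of the current image against the prompt, and optionally blurring some of those cuts can smooth the gradients CLIP feeds back. A rough, self-contained sketch of that pattern, assuming torchvision and hypothetical names (`make_cuts`, `blur_sigma`), not the notebook's actual cutout code:

```python
# Illustrative sketch: sample random square "cuts", resize them for CLIP,
# and optionally blur them. Hypothetical helper, not the notebook's API.
import torch
import torchvision.transforms.functional as TF

def make_cuts(image: torch.Tensor, n_cuts: int, cut_size: int = 224,
              blur_sigma: float = 0.0) -> torch.Tensor:
    """Sample n_cuts random square crops, resize to cut_size, optionally blur them."""
    _, _, h, w = image.shape
    cuts = []
    for _ in range(n_cuts):
        size = int(torch.randint(cut_size, min(h, w) + 1, ()).item())
        top = int(torch.randint(0, h - size + 1, ()).item())
        left = int(torch.randint(0, w - size + 1, ()).item())
        cut = TF.resized_crop(image, top, left, size, size, [cut_size, cut_size])
        if blur_sigma > 0:
            cut = TF.gaussian_blur(cut, kernel_size=9, sigma=blur_sigma)
        cuts.append(cut)
    return torch.cat(cuts)

cuts = make_cuts(torch.rand(1, 3, 512, 512), n_cuts=4, blur_sigma=1.5)
print(cuts.shape)  # torch.Size([4, 3, 224, 224])
```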
v1.6
- Adds the LAION `ongo` (fine-tuned on artworks) and `erlich` (fine-tuned for making logos) models
- Adds noising and scaling during the advanced scheduling phases
- Adds ViT-L conditioning downstream to the Latent Diffusion unet process
- Small tweaks on dynamic thresholding
- Fixes linear ETA
A Dango233 and apolinario (@multimodalart) Colab notebook implementing crowsonkb's V-Objective Diffusion, with the following changes:
- Added Dango233 parallel multi-model diffusion (e.g. running `cc12m_1` and `yfcc_2` at the same time, with or without lerping; see the sketch after this list)
- Added Dango233 cuts, augs and attributes scheduling
- Added Dango233 mag and clamp settings
- Added apolinario (@multimodalart) ETA scheduling
- Added nshepperd v-diffusion imagenet512 and danbooru models
- Added dmarx Multi-Modal-Comparators
- Added crowsonkb AVA and Simulacra bot aesthetic models
- Added LAION-AI aesthetic pre-calculated embeddings
- Added open_clip gradient checkpointing
- Added Dango233 inpainting mode
- Added apolinario (@multimodalart) "internal upscaling" (upscales the output with `yfcc_2` or `openimages`)
- Added apolinario (@multimodalart) savable settings and settings library (including the `defaults` and `disco-diffusion-defaults` settings). Share yours with us too with a pull request!
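On the parallel multi-model item above: the core idea is to denoise the same image with two diffusion models at every step and linearly interpolate (lerp) their predictions, which the sampler then treats as if they came from a single model. A simplified sketch of that blending step, with hypothetical names (`combined_prediction`, `weight`) rather than the notebook's actual implementation:

```python
# Simplified sketch of lerping two diffusion models' predictions at each step.
# `combined_prediction` and `weight` are hypothetical names, not the notebook's API.
import torch

@torch.no_grad()
def combined_prediction(model_a, model_b, x: torch.Tensor, t: torch.Tensor,
                        weight: float = 0.5) -> torch.Tensor:
    """Blend two denoisers' outputs: weight=0 uses only model_a, weight=1 only model_b."""
    pred_a = model_a(x, t)   # e.g. a cc12m_1-style model's prediction
    pred_b = model_b(x, t)   # e.g. a yfcc_2-style model's prediction
    return torch.lerp(pred_a, pred_b, weight)
```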
To do:
- Figure out better defaults and add more settings to the settings library (contribute with a PR!)
- Add all notebooks to a single pipeline where one model can be the output of the other (similar to Centipede Diffusion)
- Add all notebooks to the MindsEye UI
- Modularise everything
- Create a command line version
- Add an inpainting UI
- Improve performance, both in speed and VRAM consumption
- More technical issues will be listed on https://github.com/multimodalart/majesty-diffusion/issues
Some functions and methods are from various code masters, including but not limited to advadnoun, crowsonkb, nshepperd, russelldc, Dango233 and many others.