Releases: d8ahazard/sd_dreambooth_extension
1.1.0 - SDXL, UI Redesign, Fixes
Definitely not dead; I just don't have all the time in the world to work on this anymore. But good things are coming. ;)
While SDXL support has been in the main branch for a while now, there were still some issues that needed fixing, and hopefully, this addresses most of them.
It also brings a much-needed overhaul to the UI, including options to hide/show advanced settings, and a more streamlined and easy-to-follow "workflow".
Additionally, I've taken steps to ensure that installation goes more smoothly than it has in the past.
I'm sure there are still many issues that need resolution, but this should be a good start in that direction.
1.0.14 - Adaptive Optimizers and more!
June 3, 2023 Patch Notes (#1251)
- 🎨 New Adaptive optimizers | (see below)
- 📷 Image processing improvements | Added support for transparent dataset images and parsing orientation from EXIF data
- 🧪 New experimental settings:
  - ToMe (Token Merging) - Increases training performance at the cost of reduced quality (see the sketch after this list)
  - Disable Class Matching - Disables matching rules when collecting the class dataset (useful for pre-made class sets)
  - Shared Model Source for LoRA - Enabling will reuse extracted source checkpoints
  - TENC controls for weight decay and gradient clip
- 🐛 Bug fixes
- Some new settings
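For context on ToMe, this is roughly how token merging is applied with the standalone `tomesd` package; the extension wires this up internally, so this sketch only illustrates the underlying library:

```python
# Illustrative use of the standalone tomesd package (pip install tomesd).
# ToMe merges redundant tokens in attention, trading some quality for speed.
import tomesd
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
tomesd.apply_patch(pipe, ratio=0.5)  # merge up to 50% of tokens
```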
D-Adaptation Optimizers!
This release includes Facebook Research's D-Adaptation adaptive optimizers. These new optimizers are much easier to configure and give similar quality; however, they may only work with LoRA on consumer cards due to VRAM limitations.
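As a rough illustration of why they're easier to configure, here is a minimal sketch using the upstream `dadaptation` package directly (not the extension's internal wiring; the model here is a stand-in for trainable parameters):

```python
# Minimal sketch of Facebook Research's dadaptation package
# (pip install dadaptation). The model is a stand-in for trainable params.
import torch
from dadaptation import DAdaptAdam

model = torch.nn.Linear(768, 768)

# D-Adaptation estimates the actual step size on the fly, so `lr` acts as
# a relative multiplier rather than a hand-tuned AdamW-style learning rate.
optimizer = DAdaptAdam(model.parameters(), lr=1.0, weight_decay=0.0)

for _ in range(10):
    loss = model(torch.randn(4, 768)).pow(2).mean()
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```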
Recommended settings
The D-Adaptation optimizers have different LR and WD ranges from Torch/8Bit AdamW. Hovering over the Optimizer field will show a tooltip with recommendations. Here's an example of a good setup:
- Lora: ✅
- Extended Lora: ✅
- Epochs: ~80 (adjust for the amount of training you want)
- Learning Rates: ~0.3 (both)
- Warmup: 0
- Optimizer: AdamW Dadaptation
- EMA: ❌
- TENC ratio: 0
- Mixed Precision: fp16
- Memory attention: Default
- Weight Decay: 0
Set your Batch Size; 6 is a good starting value. Higher values mean faster training but also more VRAM usage, so you'll OOM if it's too high (and speed will drop if your VRAM usage is at the limit). On the Saving tab, set both Lora Rank sliders to 32 (quality feels a bit better than at lower values, and I haven't noticed any improvement going above 32).
This setup should work with 8gb, and it's pretty close to optimal for fine-tuning at the moment. If you're training a new token, then you might also want to train TENC, which you can do separately if you are low on VRAM. To reduce VRAM usage even further, you can use xformers, disable extended Lora, or lower the resolution slider.
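Condensed into one place, the setup above looks roughly like this; the key names are illustrative, not the extension's actual config schema, and everything here is set through the UI:

```python
# Illustrative summary of the recommended setup above; key names are
# hypothetical and do not match the extension's db_config.json exactly.
recommended_setup = {
    "lora": True,
    "lora_extended": True,
    "epochs": 80,
    "learning_rate": 0.3,          # both LRs
    "tenc_learning_rate": 0.3,
    "warmup_steps": 0,
    "optimizer": "AdamW Dadaptation",
    "ema": False,
    "tenc_ratio": 0,
    "mixed_precision": "fp16",
    "memory_attention": "default",
    "weight_decay": 0,
    "batch_size": 6,               # raise until you hit VRAM limits
    "lora_unet_rank": 32,          # Saving tab
    "lora_tenc_rank": 32,          # Saving tab
}
```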
Other features
Portable Concept Lists
Concept paths are now automatically rebuilt from the folder where the concepts JSON resides and concatenated with the path defined inside it, allowing users to share datasets without editing the JSON. There is no need to use an absolute/rooted path in `instance_data_dir`. See #1212 for more details.
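A minimal sketch of the idea; the function and field handling here are illustrative, not the extension's actual internals:

```python
# Illustrative sketch: resolve relative instance_data_dir entries against
# the concepts JSON's own location, so the dataset folder is portable.
import json
from pathlib import Path

def load_concepts(json_path: str) -> list:
    json_file = Path(json_path).resolve()
    concepts = json.loads(json_file.read_text())
    for concept in concepts:
        data_dir = Path(concept["instance_data_dir"])
        if not data_dir.is_absolute():
            # Rebuild the path relative to the folder the JSON lives in,
            # so the dataset can be shared without editing the JSON.
            concept["instance_data_dir"] = str(json_file.parent / data_dir)
    return concepts
```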
Other issues
The extension still has some issues with the latest versions of A1111. If you experience issues, consider using an older commit (such as a9eab236) that uses Gradio 3.16.2.
Some users have also reported issues with Torch 2. We now treat it as the "standard" torch version, but consider reverting to Torch 1 if you experience torch issues.
1.0.13 - Fixes and prepping for Torch 2
Mar 15, 2023 Patch Notes (#1070)
- 🐛 Bug fixes
- 🧹 UI Cleanup | The txt2img checkbox has been replaced with dropdowns on the top of the Generate tab. + other tweaks.
Torch 2!
This extension is compatible with Torch 2. We may add a startup command in the future to automatically install it, but for now, it must be manually installed. To do so, update your A1111 project and your extension to the latest versions, then run:
Windows
cd venv\Scripts
activate
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install --force-reinstall --no-deps --pre xformers
Linux
source venv/bin/activate
pip install --force-reinstall torch torchvision --index-url https://download.pytorch.org/whl/cu118
pip install --force-reinstall --no-deps --pre xformers
MacOS
source venv/bin/activate
pip install --force-reinstall --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
pip install --force-reinstall --no-deps --pre xformers
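To confirm the reinstall took, a quick check from inside the activated venv (version strings will vary by platform and CUDA build):

```python
# Run inside the activated venv to verify the Torch 2 install.
import torch
import torchvision

print(torch.__version__)          # expect a 2.x build, e.g. "2.0.0+cu118"
print(torchvision.__version__)
print(torch.cuda.is_available())  # True on a working CUDA build (False on macOS)
```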
Torch troubleshooting
Recently, people have been reporting issues with extracting models or dreambooth not running properly. The first thing to check is your torch install. During project startup, you should see a section in your console that looks like:
[+] torch version 1.13.1+cu117 installed.
[+] torchvision version 0.14.1+cu117 installed.
...
This extension should not change your torch install. If torch is installed during `Checking Dreambooth requirements...`, you should ensure that your A1111 project is updated. If your A1111 project is already updated, then you probably have another extension installed that is messing with your torch.
1.0.12 - Fixes and some Lora love
Mar 06, 2023 Patch Notes
- 🙆♀️ Simplifying LoRA | Newly created `.safetensors` files can now be used like normal LoRA files. Hooray! See the Details section at the bottom for more info.
- 🐛 Bug fixes
- 🗣️ Discord | https://discord.gg/q8dtpfRD5w (no actual changes here, just mentioning it to make it easier to find)
Feb 28, 2023 Patch Notes
- 🌱 Seeding | Repeat training should produce exact^ model outputs (^similar, if using xformers and/or 8bit adam).
- 🦁 New optimizer | To use, set Optimizer to Lion on the Settings tab. Reducing LR to 1/10 of Adam's LRs is recommended with Lion (see the first sketch below this list).
- 🔊 Offset noise | Recommended value: 0.05-0.1; may improve output pictures with very high or low lighting (see the second sketch below this list).
- 📈 Improved graphs
- ⏱️ DEIS Scheduler | A new and improved schedule. To use it for class pic generation, uncheck Generate Classification Images Using txt2img on the Settings tab. To use it as the training scheduler, check Use DEIS for noise scheduler on the Testing tab.
- 🙆♀️ New LoRA features | Added LoRA Dropout and Conv2d support, plus custom scaling of ranks for higher quality. See cloneofsimo/lora#133 for more info.
- 🥼 Quality of Life improvements | Added version check notifications to the UI, along with changelogs.
- 🔦 Removed Flash Attention | In preparation for the release of Torch 2, Flash Attention has been removed, as it's no longer supported by diffusers. When Torch 2 is officially released, xformers will be removed as well, since its functionality is supported natively in Torch 2!
- 🐛 Bug fixes
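For the Lion recommendation above, a minimal sketch using the standalone `lion-pytorch` package (the upstream library, not the extension's internal wiring) shows the 1/10 LR rule of thumb; the model is a stand-in:

```python
# Minimal sketch of Lion's LR rule of thumb, using the standalone
# lion-pytorch package (pip install lion-pytorch).
import torch
from lion_pytorch import Lion

model = torch.nn.Linear(768, 768)

adam_lr = 1e-4
# Lion takes larger effective steps, hence roughly 1/10 of Adam's LR.
optimizer = Lion(model.parameters(), lr=adam_lr / 10, weight_decay=1e-2)
```

And a hedged sketch of the offset-noise idea itself, which perturbs the training noise with a per-channel constant so the model can learn very dark or very bright images (a simplified illustration, not the extension's exact code):

```python
import torch

def offset_noise(latents: torch.Tensor, strength: float = 0.1) -> torch.Tensor:
    """Standard noise plus a per-(sample, channel) constant offset.

    `strength` corresponds to the 0.05-0.1 recommendation above.
    """
    noise = torch.randn_like(latents)
    # One random value per (batch, channel), broadcast over H and W.
    offset = torch.randn(latents.shape[0], latents.shape[1], 1, 1,
                         device=latents.device, dtype=latents.dtype)
    return noise + strength * offset
```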
Details
Using LoRA models in A1111 web UI
With the new release, the dreambooth extension can now generate LoRA safetensors files that are compatible with the built-in extra networks feature in the web UI. This is enabled by default for new LoRA models and can be changed on the Saving tab with the "Generate LoRA weights for extra networks" option.
If you previously used LoRA from dreambooth, the following steps are recommended with the update:
- Move or remove the old `.pt` models from the `stable-diffusion-webui\models\LoRA` directory. For example, rename that directory to `LoRA.bak`.
- New trainings will create the `.pt` and `_txt.pt` files under `stable-diffusion-webui\models\dreambooth\{model_name}\LoRAs`. If you need to create a checkpoint from a previous run, you will need to copy the appropriate `.pt` files into this directory.
Within the webui, you should now automatically see the generated LoRA files in the native extra networks feature, or you can add them directly to your prompts using the `<LoRA:modelname_steps:1>` syntax.
Supporting extended LoRA
The native extra networks feature in A1111 does not support extended LoRA. However, installing the locon extension (https://github.com/KohakuBlueleaf/a1111-sd-webui-locon) will add extended LoRA support (this may get native support in the future).
Other news
This extension updates the GitPython dependency to a newer version that patches a security issue. This may break updating extensions until a code fix is approved by A1111 for the base project. See AUTOMATIC1111/stable-diffusion-webui#8118 for more details.
Upcoming changes
- Improving the documentation / wiki
- Cleaning up some of the UI
1.0.11 - Rock and roll
Feb 28, 2023 Patch Notes
- 🌱 Seeding | Repeat training should produce exact^ model outputs (^similar, if using xformers and/or 8bit adam).
- 🦁 New optimizer | To use, set Optimizer to Lion on the Settings tab. Reducing LR to 1/10 of Adam's LRs is recommended with Lion.
- 🔊 Offset noise | Recommended value: 0.05-0.1; may improve output pictures with very high or low lighting.
- 📈 Improved graphs
- ⏱️ DEIS Scheduler | A new and improved schedule. To use it for class pic generation, uncheck Generate Classification Images Using txt2img on the Settings tab. To use it as the training scheduler, check Use DEIS for noise scheduler on the Testing tab.
- 🙆♀️ New Lora features | Added Lora Dropout and Conv2d support, plus custom scaling of ranks for higher quality. See cloneofsimo/lora#133 for more info.
- 🥼 Quality of Life improvements | Added version check notifications to the UI, along with changelogs.
- 🔦 Removed Flash Attention | In preparation for the release of Torch 2, Flash Attention has been removed, as it's no longer supported by diffusers. When Torch 2 is officially released, xformers will be removed as well, since its functionality is supported natively in Torch 2!
- 🐛 Bug fixes