README.md: 19 additions & 4 deletions
@@ -25,7 +25,7 @@ This repository contains the scripts for:
The file does not contain requirements for PyTorch. Because the version of PyTorch depends on the environment, it is not included in the file. Please install PyTorch first according to the environment. See installation instructions below.
-The scripts are tested with Pytorch 2.1.2. 2.0.1 and 1.12.1 is not tested but should work.
+The scripts are tested with PyTorch 2.1.2. PyTorch 2.2 or later should also work. Please install the appropriate versions of PyTorch and xformers.
Python 3.10.x, 3.11.x, and 3.12.x should work but are not tested.
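Before running the scripts, you can check that the installed PyTorch meets the tested minimum. A minimal sketch (`meets_minimum` is an illustrative helper, not part of the repository):

```python
import re

TESTED_MIN = (2, 1)  # the scripts are tested with PyTorch 2.1.2

def meets_minimum(version_string, minimum=TESTED_MIN):
    """Return True if a 'major.minor[.patch]' version string is at least `minimum`."""
    parts = tuple(int(p) for p in re.findall(r"\d+", version_string)[:2])
    return parts >= minimum

# In practice you would pass torch.__version__ here.
print(meets_minimum("2.1.2"))   # True
print(meets_minimum("1.12.1"))  # False
```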
+
Give unrestricted script access to powershell so venv can work:
- Open an administrator powershell window
@@ -78,10 +80,12 @@ accelerate config
If `python -m venv` shows only `python`, change `python` to `py`.
-__Note:__ Now `bitsandbytes==0.43.0`, `prodigyopt==1.0` and `lion-pytorch==0.0.6` are included in the requirements.txt. If you'd like to use the another version, please install it manually.
+__Note:__ `bitsandbytes==0.44.0`, `prodigyopt==1.0` and `lion-pytorch==0.0.6` are now included in requirements.txt. If you'd like to use another version, please install it manually.
This installation is for CUDA 11.8. If you use a different version of CUDA, please install the appropriate versions of PyTorch and xformers. For example, for CUDA 12.1, run `pip install torch==2.1.2 torchvision==0.16.2 --index-url https://download.pytorch.org/whl/cu121` and `pip install xformers==0.0.23.post1 --index-url https://download.pytorch.org/whl/cu121`.
+If you use PyTorch 2.2 or later, please change `torch==2.1.2`, `torchvision==0.16.2`, and `xformers==0.0.23.post1` to the appropriate versions.
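As the cu118/cu121 examples show, the wheel index URL follows directly from the CUDA version. A sketch of the mapping (`torch_index_url` is an illustrative helper, not part of the scripts):

```python
def torch_index_url(cuda_version):
    """Map a CUDA version string such as '12.1' to the PyTorch wheel
    index URL, e.g. 'https://download.pytorch.org/whl/cu121'."""
    tag = "cu" + cuda_version.replace(".", "")
    return "https://download.pytorch.org/whl/" + tag

print(torch_index_url("11.8"))  # https://download.pytorch.org/whl/cu118
print(torch_index_url("12.1"))  # https://download.pytorch.org/whl/cu121
```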
@@ -142,12 +146,18 @@ The majority of scripts is licensed under ASL 2.0 (including codes from Diffuser
## Change History
-### Working in progress
+### Jan 17, 2025 / 2025-01-17 Version 0.9.0
- __important__ The dependent libraries are updated. Please see [Upgrade](#upgrade) and update the libraries.
- bitsandbytes, transformers, accelerate and huggingface_hub are updated.
- If you encounter any issues, please report them.
+- The dev branch is merged into main. The documentation is delayed, and I apologize for that. I will gradually improve it.
+- The state just before the merge is released as Version 0.8.8, so please use it if you encounter any issues.
+- The following changes are included.
+
+#### Changes
+
- Fixed a bug where the loss weight was incorrect when `--debiased_estimation_loss` was specified with `--v_parameterization`. PR [#1715](https://github.com/kohya-ss/sd-scripts/pull/1715) Thanks to catboxanon! See the PR for details.
152
162
- Removed the warning when `--v_parameterization` is specified in SDXL and SD1.5. PR [#1717](https://github.com/kohya-ss/sd-scripts/pull/1717)
@@ -188,7 +198,6 @@ The majority of scripts is licensed under ASL 2.0 (including codes from Diffuser
- See the [transformers documentation](https://huggingface.co/docs/transformers/v4.44.2/en/main_classes/optimizer_schedules#schedules) for details on each scheduler.
- `--lr_warmup_steps` and `--lr_decay_steps` can now be specified as a ratio of the total number of training steps, not just as an absolute step count. Example: `--lr_warmup_steps=0.1` or `--lr_warmup_steps=10%`.
-https://github.com/kohya-ss/sd-scripts/pull/1393
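The ratio and percent forms of `--lr_warmup_steps` can be resolved to absolute steps as in this sketch (`resolve_steps` is an illustrative helper, not the scripts' actual implementation):

```python
def resolve_steps(value, total_steps):
    """Resolve a warmup/decay specification to an absolute step count.
    Accepts an integer step count ('500'), a float ratio below 1 ('0.1'),
    or a percentage ('10%')."""
    s = str(value).strip()
    if s.endswith("%"):
        return int(total_steps * float(s[:-1]) / 100)
    v = float(s)
    return int(total_steps * v) if v < 1 else int(v)

print(resolve_steps("0.1", 2000))  # 200
print(resolve_steps("10%", 2000))  # 200
print(resolve_steps("500", 2000))  # 500
```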
- When enlarging images in the scripts (when the training image is small and `bucket_no_upscale` is not specified), Pillow's resize with LANCZOS interpolation is now used instead of OpenCV's resize with LANCZOS4 interpolation. The quality of the enlargement may be slightly improved. PR [#1426](https://github.com/kohya-ss/sd-scripts/pull/1426) Thanks to sdbds!
- Sample image generation during training now works on non-CUDA devices. PR [#1433](https://github.com/kohya-ss/sd-scripts/pull/1433) Thanks to millie-v!
@@ -258,6 +267,12 @@ https://github.com/kohya-ss/sd-scripts/pull/1290) Thanks to frodo821!
- Added a prompt option `--f` to `gen_imgs.py` to specify the file name when saving. Also, Diffusers-based keys for LoRA weights are now supported.