This repository was archived by the owner on Dec 10, 2023. It is now read-only.

Commit a8fae1a (1 parent: 3ccb04e)

Update README

File tree: 6 files changed (+50, -14 lines)

README.md (38 additions, 2 deletions)
```diff
@@ -8,9 +8,11 @@ See [official wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki
 
 ## Usage
 
+### Enhanced img2img
+
 Switch to **"img2img"** tab, under the **"script"** column, select **"enhanced img2img"**.
 
-![](screenshot.png)
+![](screenshot_1.png)
 
 - **Input directory**: a folder containing all the images you want to process.
 - **Output directory**: a folder to save output images.
```
```diff
@@ -25,12 +27,46 @@ Switch to **"img2img"** tab, under the **"script"** column, select **"enhanced i
 - **Files to process**: filenames of the images you want to process. I recommend naming your images with digit suffixes, e.g. `000233.png, 000234.png, 000235.png, ...` or `image_233.jpg, image_234.jpg, image_235.jpg, ...`. That way, you can use `233,234,235` or simply `233-235` to select these files. Otherwise, you need to give the full filenames, like `image_a.webp,image_b.webp,image_c.webp`.
 - **Use deepbooru prompt**: use DeepDanbooru to predict image tags; if you have entered prompts in the prompt area, the predicted tags are appended to them.
 - **Using contextual information**: keep only tags that appear in the prediction results of both the current and the next frame; this can improve accuracy (maybe).
-- **Use csv prompt list** and **input file path**: use a `.csv` file as prompts for each image, one line per image.
 - **Loopback**: similar to the loopback script, this runs img2img on the input images twice to enhance the AI's creativity.
 - **Firstpass width** and **firstpass height**: the AI tends to be more creative when the firstpass size is smaller.
 - **Denoising strength**: denoising strength for the first pass; best kept no higher than 0.4.
+- **Read tags from text files**: read tags from text files with the same filename as the current input image.
+- **Text files directory**: optional; tags are loaded from the input directory if not specified.
+- **Use csv prompt list** and **input file path**: use a `.csv` file as prompts for each image, one line per image.
+
+### Multi-frame rendering
+
+Switch to **"img2img"** tab, under the **"script"** column, select **"multi-frame rendering"**; it **should be used with ControlNet**. For more information, see [the original post](https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion).
+
+![](screenshot_2.png)
+
+- **Input directory**: a folder containing all the images you want to process.
+- **Output directory**: a folder to save output images.
+- **Initial denoise strength**: the denoising strength of the first frame. You can set the denoising strength of the first frame and of the remaining frames separately; the strength for the remaining frames is controlled through the main img2img interface.
+- **Append interrogated prompt at each iteration**: use CLIP or DeepDanbooru to predict image tags; if you have entered prompts in the prompt area, the predicted tags are appended to them.
+- **Third frame (reference) image**: the image placed in the third frame.
+  - None: use only two images, the previous frame and the current frame, without a third reference image.
+  - FirstGen: use the **processed** first image as the reference image.
+  - OriginalImg: use the **original** first image as the reference image.
+  - Historical: use the second-to-last frame before the current frame as the reference image.
+- **Enable color correction**: apply color correction based on the loopback image. When using a non-FirstGen reference image, turn this on to reduce color fading.
+- **Unfreeze seed**: when checked, the base seed value is automatically incremented by 1 each time an image is generated.
+- **Loopback source**: the image used in the second frame.
+  - Previous: generate from the previously generated image.
+  - Current: generate from the current input image.
+  - First: generate from the first generated image.
+- **Read tags from text files**: read tags from text files with the same filename as the current input image.
+- **Text files directory**: optional; tags are loaded from the input directory if not specified.
+- **Use csv prompt list** and **input file path**: use a `.csv` file as prompts for each image, one line per image.
+
+## Tutorial video (in Chinese)
+
+<iframe src="//player.bilibili.com/player.html?aid=563344169&bvid=BV1pv4y1o7An&cid=911472358&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
+
+<iframe src="//player.bilibili.com/player.html?aid=865839831&bvid=BV1R54y1M7u5&cid=1047760345&page=1" scrolling="no" border="0" frameborder="no" framespacing="0" allowfullscreen="true"> </iframe>
 
 ## Credit
 
 AUTOMATIC1111's WebUI - https://github.com/AUTOMATIC1111/stable-diffusion-webui
+
 Multi-frame Rendering - https://xanthius.itch.io/multi-frame-rendering-for-stablediffusion
```
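The `233-235` range syntax described under **Files to process** can be sketched as follows. `expand_file_spec` is a hypothetical helper for illustration only, handles only numeric specs with zero-padded names, and is not the extension's actual parser (which also accepts full filenames):

```python
def expand_file_spec(spec, width=6, ext=".png"):
    # Expand a "Files to process" spec such as "233,234" or "233-235"
    # into zero-padded filenames like "000233.png".
    names = []
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            nums = range(int(lo), int(hi) + 1)
        else:
            nums = [int(part)]
        names.extend(f"{n:0{width}d}{ext}" for n in nums)
    return names

print(expand_file_spec("233-235"))
# → ['000233.png', '000234.png', '000235.png']
```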

screenshot.png (-126 KB): binary file not shown.

screenshot_1.png (161 KB)

screenshot_2.png (107 KB)

scripts/enhanced_img2img.py (1 addition, 1 deletion)
```diff
@@ -122,7 +122,7 @@ def ui(self, is_img2img):
 
         with gr.Row():
             txt_path = gr.Textbox(
-                label='Text files directory (Optional, will load from input dir if not specified)',
+                label='Text files directory (optional, will load from input dir if not specified)',
                 lines=1)
 
         with gr.Row():
```

scripts/multi_frame_rendering.py (11 additions, 11 deletions)
```diff
@@ -30,7 +30,7 @@
 
 class Script(scripts.Script):
     def title(self):
-        return "(Beta) Multi-frame Video rendering"
+        return "Multi-frame rendering"
 
     def show(self, is_img2img):
         return is_img2img
```
```diff
@@ -44,34 +44,34 @@ def ui(self, is_img2img):
                minimum=0,
                maximum=1,
                step=0.05,
-                label='Initial Denoise Strength',
+                label='Initial denoising strength',
                value=1,
                elem_id=self.elem_id("first_denoise"))
            append_interrogation = gr.Dropdown(
                label="Append interrogated prompt at each iteration", choices=[
                    "None", "CLIP", "DeepBooru"], value="None")
            third_frame_image = gr.Dropdown(
-                label="Third Frame Image",
+                label="Third frame (reference) image",
                choices=[
                    "None",
                    "FirstGen",
                    "OriginalImg",
                    "Historical"],
                value="FirstGen")
            color_correction_enabled = gr.Checkbox(
-                label="Enable Color Correction",
+                label="Enable color correction",
                value=False,
                elem_id=self.elem_id("color_correction_enabled"))
            unfreeze_seed = gr.Checkbox(
-                label="Unfreeze Seed",
+                label="Unfreeze seed",
                value=False,
                elem_id=self.elem_id("unfreeze_seed"))
            loopback_source = gr.Dropdown(
-                label="Loopback Source",
+                label="Loopback source",
                choices=[
-                    "PreviousFrame",
-                    "InputFrame",
-                    "FirstGen"],
+                    "Previous",
+                    "Current",
+                    "First"],
                value="InputFrame")
 
            with gr.Row():
```
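The "Unfreeze seed" checkbox above drives a very small piece of behaviour: the base seed advances by 1 after each generated frame when checked, and stays fixed otherwise. As a trivial sketch of that documented behaviour (hypothetical helper, not the script's code):

```python
def advance_seed(seed, unfreeze_seed):
    # With 'Unfreeze seed' checked the base seed moves forward by 1
    # after every generated frame; otherwise every frame is sampled
    # with the same base seed.
    return seed + 1 if unfreeze_seed else seed
```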
```diff
@@ -221,9 +221,9 @@ def run(
 
            if(i > 0):
                loopback_image = p.init_images[0]
-                if loopback_source == "InputFrame":
+                if loopback_source == "Current":
                    loopback_image = p.control_net_input_image
-                elif loopback_source == "FirstGen":
+                elif loopback_source == "First":
                    loopback_image = history
 
            if third_frame_image != "None":
```
0 commit comments
