
Stable Diffusion WebUI Forge

Stable Diffusion WebUI Forge is a platform on top of Stable Diffusion WebUI (based on Gradio) to make development easier, optimize resource management, speed up inference (including faster image generation on low-VRAM GPUs), and study experimental features.

The name "Forge" is inspired from "Minecraft Forge". This project is aimed at becoming SD WebUI's Forge.

Forge is currently based on SD-WebUI 1.10.1 at this commit. (Because the original SD-WebUI is now almost static, Forge will sync with it roughly every 90 days, or whenever important fixes land.)

News

Aug 15: Flux BNB NF4 / GGUF Q8/Q5/Q4 are all natively supported, with a fully working GPU weight slider, a Queue/Async swap toggle, and a swap-location toggle. (And no, no other software has these - 2024 Aug 15.)

Aug 15: All Flux BNB NF4 / GGUF Q8/Q5/Q4 models have full native LoRA support. (And no, no other software has this - 2024 Aug 15.)

Installing Forge

If you are proficient in Git and want to install Forge as another branch of SD-WebUI, please see here. This way you can reuse all SD checkpoints and all extensions installed in your original SD-WebUI, but you should know what you are doing.

If you know what you are doing, you can install Forge using the same method as SD-WebUI: install Git and Python, git clone the Forge repo https://github.com/lllyasviel/stable-diffusion-webui-forge.git, and then run webui-user.bat.
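On Windows, for example, those steps amount to the following commands (assuming Git and Python are already installed and on PATH):

```
git clone https://github.com/lllyasviel/stable-diffusion-webui-forge.git
cd stable-diffusion-webui-forge
webui-user.bat
```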

Or you can just use this one-click installation package (with Git and Python included).

>>> Click Here to Download One-Click Package (CUDA 12.1 + Pytorch 2.3.1) <<<

Some other CUDA/Torch Versions:

Forge with CUDA 12.1 + Pytorch 2.3.1 <- Recommended

Forge with CUDA 12.4 + Pytorch 2.4 <- Fastest, but MSVC may be broken, xformers may not work

Forge with CUDA 12.1 + Pytorch 2.1 <- the older environment used previously

After downloading, uncompress the package, use update.bat to update, and use run.bat to launch.

Note that running update.bat is important; otherwise you may be running an older version with unfixed bugs.


Previous Versions

You can download previous versions here.

Forge Status

Based on manual tests, one by one:

| Component | Status | Last Test |
|---|---|---|
| Basic Diffusion | Normal | 2024 July 27 |
| GPU Memory Management System | Normal | 2024 July 27 |
| LoRAs | Normal | 2024 July 27 |
| All Preprocessors | Normal | 2024 July 27 |
| All ControlNets | Normal | 2024 July 27 |
| All IP-Adapters | Normal | 2024 July 27 |
| All Instant-IDs | Normal | 2024 July 27 |
| All Reference-only Methods | Normal | 2024 July 27 |
| All Integrated Extensions | Normal | 2024 July 27 |
| Popular Extensions (Adetailer, etc.) | Normal | 2024 July 27 |
| Gradio 4 UIs | Normal | 2024 July 27 |
| Gradio 4 Forge Canvas | Normal | 2024 July 27 |
| LoRA/Checkpoint Selection UI for Gradio 4 | Normal | 2024 July 27 |
| Photopea/OpenposeEditor/etc. for ControlNet | Normal | 2024 July 27 |
| Wacom 128-level touch pressure support for Canvas | Normal | 2024 July 15 |
| Microsoft Surface touch pressure support for Canvas | Broken, pending fix | 2024 July 29 |

Feel free to open an issue if anything is broken and I will take a look every several days. If I do not update this "Forge Status" table, it means I cannot reproduce the problem. In that case, a fresh re-install should help in most cases.

UnetPatcher

Below is a self-contained, single-file implementation of FreeU V2.

See also extensions-builtin/sd_forge_freeu/scripts/forge_freeu.py:

```python
import torch
import gradio as gr

from modules import scripts


def Fourier_filter(x, threshold, scale):
    # FFT
    x_freq = torch.fft.fftn(x.float(), dim=(-2, -1))
    x_freq = torch.fft.fftshift(x_freq, dim=(-2, -1))

    B, C, H, W = x_freq.shape
    mask = torch.ones((B, C, H, W), device=x.device)

    crow, ccol = H // 2, W // 2
    mask[..., crow - threshold:crow + threshold, ccol - threshold:ccol + threshold] = scale
    x_freq = x_freq * mask

    # IFFT
    x_freq = torch.fft.ifftshift(x_freq, dim=(-2, -1))
    x_filtered = torch.fft.ifftn(x_freq, dim=(-2, -1)).real

    return x_filtered.to(x.dtype)


def patch_freeu_v2(unet_patcher, b1, b2, s1, s2):
    model_channels = unet_patcher.model.diffusion_model.config["model_channels"]
    scale_dict = {model_channels * 4: (b1, s1), model_channels * 2: (b2, s2)}
    on_cpu_devices = {}

    def output_block_patch(h, hsp, transformer_options):
        scale = scale_dict.get(h.shape[1], None)
        if scale is not None:
            hidden_mean = h.mean(1).unsqueeze(1)
            B = hidden_mean.shape[0]
            hidden_max, _ = torch.max(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_min, _ = torch.min(hidden_mean.view(B, -1), dim=-1, keepdim=True)
            hidden_mean = (hidden_mean - hidden_min.unsqueeze(2).unsqueeze(3)) / (hidden_max - hidden_min).unsqueeze(2).unsqueeze(3)

            h[:, :h.shape[1] // 2] = h[:, :h.shape[1] // 2] * ((scale[0] - 1) * hidden_mean + 1)

            if hsp.device not in on_cpu_devices:
                try:
                    hsp = Fourier_filter(hsp, threshold=1, scale=scale[1])
                except Exception:
                    print("Device", hsp.device, "does not support torch.fft; falling back to CPU.")
                    on_cpu_devices[hsp.device] = True
                    hsp = Fourier_filter(hsp.cpu(), threshold=1, scale=scale[1]).to(hsp.device)
            else:
                hsp = Fourier_filter(hsp.cpu(), threshold=1, scale=scale[1]).to(hsp.device)

        return h, hsp

    m = unet_patcher.clone()
    m.set_model_output_block_patch(output_block_patch)
    return m


class FreeUForForge(scripts.Script):
    sorting_priority = 12  # It will be the 12th item on UI.

    def title(self):
        return "FreeU Integrated"

    def show(self, is_img2img):
        # Make this extension visible in both the txt2img and img2img tabs.
        return scripts.AlwaysVisible

    def ui(self, *args, **kwargs):
        with gr.Accordion(open=False, label=self.title()):
            freeu_enabled = gr.Checkbox(label='Enabled', value=False)
            freeu_b1 = gr.Slider(label='B1', minimum=0, maximum=2, step=0.01, value=1.01)
            freeu_b2 = gr.Slider(label='B2', minimum=0, maximum=2, step=0.01, value=1.02)
            freeu_s1 = gr.Slider(label='S1', minimum=0, maximum=4, step=0.01, value=0.99)
            freeu_s2 = gr.Slider(label='S2', minimum=0, maximum=4, step=0.01, value=0.95)

        return freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2

    def process_before_every_sampling(self, p, *script_args, **kwargs):
        # This will be called before every sampling.
        # If you use highres fix, this will be called twice.

        freeu_enabled, freeu_b1, freeu_b2, freeu_s1, freeu_s2 = script_args

        if not freeu_enabled:
            return

        unet = p.sd_model.forge_objects.unet

        unet = patch_freeu_v2(unet, freeu_b1, freeu_b2, freeu_s1, freeu_s2)

        p.sd_model.forge_objects.unet = unet

        # The lines below add entries to the generation-info text shown
        # under the image outputs in the UI.
        # extra_generation_params does not influence the results.
        p.extra_generation_params.update(dict(
            freeu_enabled=freeu_enabled,
            freeu_b1=freeu_b1,
            freeu_b2=freeu_b2,
            freeu_s1=freeu_s1,
            freeu_s2=freeu_s2,
        ))

        return
```
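
As a quick sanity check of Fourier_filter, here is a standalone snippet (assuming only torch and the function definition above are in scope): a scale of 1.0 leaves the input unchanged up to FFT round-off, while a smaller scale attenuates the lowest-frequency band.

```python
import torch

x = torch.randn(1, 4, 64, 64)

# With scale=1.0 the mask is all ones, so the FFT round-trip is an
# identity transform up to floating-point error.
assert torch.allclose(Fourier_filter(x, threshold=1, scale=1.0), x, atol=1e-4)

# With scale=0.5, the 2x2 low-frequency block around the spectrum center
# is halved, so the filtered tensor loses some low-frequency energy.
y = Fourier_filter(x, threshold=1, scale=0.5)
print(x.var().item(), y.var().item())
```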

See also Forge's Unet Implementation.
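
The script above also illustrates the general UnetPatcher pattern: clone the patcher, register a patch callback, and assign the clone back to p.sd_model.forge_objects.unet. Below is a minimal sketch of that pattern in isolation; the skip_scale patch is purely illustrative (not a Forge feature), while the API names are the same ones used in the FreeU script.

```python
def patch_scale_skip(unet_patcher, skip_scale=0.9):
    # h is the decoder hidden state, hsp the skip connection tensor;
    # this toy patch simply attenuates the skip connection.
    def output_block_patch(h, hsp, transformer_options):
        return h, hsp * skip_scale

    # Always clone before patching so the shared UNet object stays unmodified.
    m = unet_patcher.clone()
    m.set_model_output_block_patch(output_block_patch)
    return m


# Inside a script's process_before_every_sampling, as in FreeU above:
#     unet = p.sd_model.forge_objects.unet
#     p.sd_model.forge_objects.unet = patch_scale_skip(unet, 0.9)
```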

Under Construction

WebUI Forge is still under construction, and docs, UI, and functionality may change with updates.
