This extension for AUTOMATIC1111's Stable Diffusion web UI adds a BiRefNet section to the original Stable Diffusion WebUI's Extras tab, so you can benefit from BiRefNet's background removal feature.
Find the UI for BiRefNet background removal in the Extras tab after installing the extension.
- Open "Extensions" tab.
- Open "Install from URL" tab in the tab.
- Enter https://github.com/dimitribarbot/sd-webui-birefnet.git to "URL for extension's git repository".
- Press "Install" button.
- It may take a few minutes to install the extension. At the end, you will see the message "Installed into stable-diffusion-webui\extensions\sd-webui-birefnet. Use Installed tab to restart".
- Go to "Installed" tab, click "Check for updates", and then click "Apply and restart UI". (The next time you can also use these buttons to update this extension.)
The available models are:
- General: A pre-trained model for general use cases.
- General-Lite: A light pre-trained model for general use cases.
- General-Lite-2K: A light pre-trained model for general use cases in high resolution (2560x1440).
- Portrait: A pre-trained model for human portraits.
- Matting: A pre-trained model for general trimap-free matting use.
- DIS: A pre-trained model for dichotomous image segmentation (DIS).
- HRSOD: A pre-trained model for high-resolution salient object detection (HRSOD).
- COD: A pre-trained model for concealed object detection (COD).
- DIS-TR_TEs: A pre-trained model trained on a massive dataset.
Model files go into `stable-diffusion-webui/models/birefnet` (they are downloaded automatically on first run if the folder is not present).
If necessary, they can be downloaded from:
- General ➔ `model.safetensors` (must be renamed to `General.safetensors`)
- General-Lite ➔ `model.safetensors` (must be renamed to `General-Lite.safetensors`)
- General-Lite-2K ➔ `model.safetensors` (must be renamed to `General-Lite-2K.safetensors`)
- Portrait ➔ `model.safetensors` (must be renamed to `Portrait.safetensors`)
- Matting ➔ `model.safetensors` (must be renamed to `Matting.safetensors`)
- DIS ➔ `model.safetensors` (must be renamed to `DIS.safetensors`)
- HRSOD ➔ `model.safetensors` (must be renamed to `HRSOD.safetensors`)
- COD ➔ `model.safetensors` (must be renamed to `COD.safetensors`)
- DIS-TR_TEs ➔ `model.safetensors` (must be renamed to `DIS-TR_TEs.safetensors`)
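If you download a checkpoint manually, a minimal sketch of placing and renaming it is shown below. The paths used here are assumptions and must be adapted to your own setup.

```python
from pathlib import Path
import shutil

# Assumed locations: adapt to your own installation.
webui_root = Path("stable-diffusion-webui")                       # WebUI installation directory
models_dir = webui_root / "models" / "birefnet"                   # where the extension looks for weights
downloaded = Path("~/Downloads/model.safetensors").expanduser()   # manually downloaded checkpoint

models_dir.mkdir(parents=True, exist_ok=True)
# Each checkpoint is distributed as "model.safetensors" and must be renamed
# after the model it corresponds to, e.g. "General.safetensors".
shutil.copy2(downloaded, models_dir / "General.safetensors")
```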
Routes have been added to Automatic1111's SD WebUI API:
- `/birefnet/single`: remove the background of a single image.
- `/birefnet/multi`: remove the background of multiple images.
Both endpoints share these parameters:
- `return_foreground`: whether to return the foreground image (the input image without its background).
- `return_mask`: whether to return the mask (can be used for inpainting).
- `return_edge_mask`: whether to return the edge mask (can be used to blend the foreground image with another background).
- `edge_mask_width`: edge mask width in pixels. Defaults to 64.
- `model_name`: `General`, `General-Lite`, `General-Lite-2K`, `Portrait`, `Matting`, `DIS`, `HRSOD`, `COD` or `DIS-TR_TEs`. BiRefNet model to be used. Defaults to `General`.
- `output_dir`: directory where output images are saved.
- `output_extension`: output image file extension (without the leading dot, `png` by default).
- `device_id`: GPU device id.
- `send_output`: `true` if you want output images to be sent as base64 encoded strings, `false` otherwise.
- `save_output`: `true` if you want output images to be saved in `output_dir`, `false` otherwise.
- `use_model_cache`: `true` if you want the BiRefNet model to be cached for subsequent calls using the same model name, `false` otherwise.
- `flag_force_cpu`: force CPU inference.
Additional parameters for the `/birefnet/single` endpoint are:
- `image`: source image. It can be a path to an existing file, a URL, or a base64 encoded string.
- `resolution`: source image resolution. Keep it empty to automatically detect the source image size.
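As an illustration, here is a hedged sketch of calling the single-image endpoint with Python's `requests` library. It assumes the WebUI is running locally on the default port with the API enabled (`--api`); the exact shape of the JSON response depends on the extension's API schema, so inspect it before relying on specific field names.

```python
import base64
import requests

# Assumption: WebUI running locally with --api on the default port.
url = "http://127.0.0.1:7860/birefnet/single"

# Source image as a base64 encoded string (a file path or URL would also work).
with open("input.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "image": image_b64,
    "resolution": "",            # empty: detect the source image size automatically
    "model_name": "General",
    "return_foreground": True,
    "return_mask": True,
    "return_edge_mask": False,
    "send_output": True,         # return images as base64 encoded strings
    "save_output": False,
}

response = requests.post(url, json=payload)
response.raise_for_status()
result = response.json()
# Inspect the returned keys; the field names holding the images are not shown here.
print(list(result))
```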
Additional parameters for the `/birefnet/multi` endpoint are:
- `inputs`: an array of objects with the following arguments:
  - `image`: source image. It can be a path to an existing file, a URL, or a base64 encoded string.
  - `resolution`: source image resolution. Keep it empty to automatically detect the source image size.
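In the same hedged spirit, a batch call to the multi-image endpoint might look like the sketch below; the endpoint URL and response handling rest on the same assumptions as the previous example.

```python
import requests

# Assumption: same local WebUI instance as in the previous sketch.
url = "http://127.0.0.1:7860/birefnet/multi"

payload = {
    "inputs": [
        {"image": "/path/to/first.png", "resolution": ""},
        {"image": "https://example.com/second.jpg", "resolution": ""},
    ],
    "model_name": "Portrait",
    "return_foreground": True,
    "send_output": True,
}

result = requests.post(url, json=payload).json()
# Field names in the response depend on the extension's API schema.
print(list(result))
```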
Original author's repository: https://github.com/ZhengPeng7/BiRefNet