# API
## Context

A thread-local container of data. `Context` is an instance of `threading.local`, so you can set custom attributes like `context.foo = 2` at any time, to track temporary data in a thread-safe manner. This is especially important when using one thread per GPU.
```python
class Context:
    models: dict = {}         # model_type to model object. e.g. 'stable-diffusion': loaded_model_in_memory
    model_paths: dict = {}    # required. model_type to the path of the model file. e.g. 'stable-diffusion': 'D:\\path\\to\\model.ckpt'
    model_configs: dict = {}  # optional. model_type to the path of the config file, for custom models. e.g. 'stable-diffusion': 'D:\\pony_diffusion.yaml'
    device: str = 'cuda'      # 'cuda' or 'cuda:0', or any 'cuda:N', or 'cpu'
    device_name: str = None   # optional
    half_precision: bool = True
    vram_optimizations: set = set()  # zero or more of 'KEEP_FS_AND_CS_IN_CPU', 'SET_ATTENTION_STEP_TO_4', 'KEEP_ENTIRE_MODEL_IN_CPU'
```
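Since `Context` is a subclass of `threading.local`, custom attributes set in one thread are invisible in every other thread. A minimal sketch of that behavior using only the standard library (a stand-in class, not sdkit's actual `Context`):

```python
import threading

class Context(threading.local):
    """Minimal stand-in for sdkit's Context: a threading.local subclass."""
    pass

context = Context()
context.foo = 2  # custom attribute, set in the main thread

seen_in_worker = []

def worker():
    # the worker thread gets a fresh, empty view of the same context object
    seen_in_worker.append(hasattr(context, "foo"))

t = threading.Thread(target=worker)
t.start()
t.join()

print(hasattr(context, "foo"))  # the main thread still sees foo
print(seen_in_worker[0])        # the worker never did
```

This is why one `Context` per GPU-thread works without locks: each thread's model references and settings stay isolated.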
Methods for loading and unloading models from memory, scanning models, and downloading or resolving known models from the models db.
```python
load_model(context: Context, model_type: str, **kwargs)
unload_model(context: Context, model_type: str, **kwargs)
download_model(model_type: str, model_id: str, download_base_dir: str=None, subdir_for_model_type=True)
download_models(models: dict, download_base_dir: str=None, subdir_for_model_type=True)
resolve_downloaded_model_path(model_type: str, model_id: str, download_base_dir: str=None, subdir_for_model_type=True)
get_model_info_from_db(quick_hash=None, model_type=None, model_id=None)
scan_model(file_path)
```
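The download/resolve helpers share the `download_base_dir` and `subdir_for_model_type` arguments. A hedged sketch of how the on-disk path is likely composed from them; the default base directory and the `.ckpt` extension here are assumptions for illustration, not sdkit's actual resolution logic (the real library consults its models db):

```python
import os

def resolve_downloaded_model_path(model_type: str, model_id: str,
                                  download_base_dir: str = None,
                                  subdir_for_model_type: bool = True) -> str:
    """Illustrative only: compose the expected on-disk location of a model.

    Assumes a '~/.cache/sdkit' default base dir and a '.ckpt' extension.
    """
    base = download_base_dir or os.path.join(os.path.expanduser("~"), ".cache", "sdkit")
    if subdir_for_model_type:
        # e.g. <base>/stable-diffusion/<model_id>.ckpt
        base = os.path.join(base, model_type)
    return os.path.join(base, model_id + ".ckpt")

print(resolve_downloaded_model_path("stable-diffusion", "sd-v1-5", download_base_dir="/models"))
```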
Supported values for `model_type` are `stable-diffusion`, `vae`, `hypernetwork`, `gfpgan` and `realesrgan`.
If the `model_type` is `stable-diffusion`, then `load_model()` accepts an additional `scan_model: bool` argument. You can set it to `False` to skip scanning the stable diffusion model (for malicious content) while loading, to save time.
Methods for generating content using Stable Diffusion. Please ensure that the `stable-diffusion` model is loaded into memory before calling these methods.
```python
generate_images(
    context: Context,
    prompt: str = "",
    negative_prompt: str = "",
    seed: int = 42,
    width: int = 512,
    height: int = 512,
    num_outputs: int = 1,
    num_inference_steps: int = 25,
    guidance_scale: float = 7.5,
    init_image = None,
    init_image_mask = None,
    prompt_strength: float = 0.8,
    preserve_init_image_color_profile = False,
    sampler_name: str = "euler_a",  # see the list of supported samplers below
    hypernetwork_strength: float = 0,
    callback = None,
)
```
Supported samplers (14): "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms", "dpm_solver_stability", "dpmpp_2s_a", "dpmpp_2m", "dpmpp_sde", "dpm_fast", "dpm_adaptive"
Note: img2img only supports DDIM. We're looking for code contributions to allow other samplers with img2img.
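Because `sampler_name` is a plain string, a typo only surfaces once generation starts. A small validation guard built from the list above (a hypothetical helper, not part of sdkit):

```python
SUPPORTED_SAMPLERS = {
    "ddim", "plms", "heun", "euler", "euler_a", "dpm2", "dpm2_a", "lms",
    "dpm_solver_stability", "dpmpp_2s_a", "dpmpp_2m", "dpmpp_sde",
    "dpm_fast", "dpm_adaptive",
}

def check_sampler(name: str, is_img2img: bool = False) -> str:
    """Validate a sampler name before passing it to generate_images()."""
    if name not in SUPPORTED_SAMPLERS:
        raise ValueError(f"unknown sampler {name!r}; expected one of {sorted(SUPPORTED_SAMPLERS)}")
    if is_img2img and name != "ddim":
        # per the note above, img2img currently supports only DDIM
        raise ValueError("img2img currently supports only the 'ddim' sampler")
    return name

print(check_sampler("euler_a"))
```

Failing fast like this is cheaper than discovering the typo after the model has already been loaded onto the GPU.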
Methods for applying filters to images, like face restoration and upscaling. Please ensure that the corresponding model is loaded into memory before calling that filter. E.g. the `gfpgan` model needs to be loaded with `load_model(context, 'gfpgan')` before calling `apply_filters(context, 'gfpgan', img)`.
```python
apply_filters(context: Context, filters, images, **kwargs)
```
Methods for merging models. We're looking for code contributions to add training methods to this module.
```python
merge_models(model0_path: str, model1_path: str, ratio: float, out_path: str, use_fp16=True)
```
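A weighted merge like this typically blends corresponding weights at the given `ratio`. The arithmetic, illustrated with plain floats rather than real checkpoint tensors (the formula is an assumption based on common weighted-sum merges, not sdkit's confirmed implementation):

```python
def merge_weight(w0: float, w1: float, ratio: float) -> float:
    """Weighted average of one pair of weights: ratio is the fraction taken from model1."""
    return w0 * (1.0 - ratio) + w1 * ratio

# ratio=0.0 keeps model0 unchanged, ratio=1.0 keeps model1 unchanged
print(merge_weight(2.0, 4.0, 0.5))  # → 3.0
```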
```python
log()  # a basic logger; tracks INFO, milliseconds, and thread name
load_tensor_file(path)
save_tensor_file(data, path)
save_images(images: list, dir_path: str, file_name='image', output_format='JPEG', output_quality=75)
save_dicts(entries: list, dir_path: str, file_name='data', output_format='txt')
hash_bytes_quick(bytes)
hash_file_quick(model_path)
hash_url_quick(model_url)
```
```python
img_to_base64_str(img, output_format="PNG", output_quality=75)
img_to_buffer(img, output_format="PNG", output_quality=75)
buffer_to_base64_str(buffered, output_format="PNG")
base64_str_to_buffer(img_str)
base64_str_to_img(img_str)
```
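The buffer/base64 helpers likely round-trip through the stdlib `base64` module. A self-contained sketch of what the two buffer-level functions might look like (the data-URL prefix is an assumption; the PIL-based `img_*` variants are omitted here):

```python
import base64
import io

def buffer_to_base64_str(buffered: io.BytesIO, output_format: str = "PNG") -> str:
    """Encode a binary buffer as a data-URL-style base64 string."""
    mime = f"image/{output_format.lower()}"
    data = base64.b64encode(buffered.getvalue()).decode("ascii")
    return f"data:{mime};base64,{data}"

def base64_str_to_buffer(img_str: str) -> io.BytesIO:
    """Decode a base64 string (with or without a data-URL prefix) back into a buffer."""
    if "," in img_str:
        img_str = img_str.split(",", 1)[1]
    return io.BytesIO(base64.b64decode(img_str))

buf = io.BytesIO(b"\x89PNG fake image bytes")
s = buffer_to_base64_str(buf)
round_tripped = base64_str_to_buffer(s).getvalue()
print(round_tripped == buf.getvalue())  # → True
```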
```python
resize_img(img: Image, desired_width, desired_height)
apply_color_profile(orig_image: Image, image_to_modify: Image)
img_to_tensor(img: Image, batch_size, device, half_precision: bool, shift_range=False, unsqueeze=False)
get_image_latent_and_mask(context: Context, image: Image, mask: Image, desired_width, desired_height, batch_size)
latent_samples_to_images(context: Context, samples)
gc()  # calls CPU-based GC, as well as torch GC
download_file(url: str, out_path: str)  # downloads large files (without storing them in memory), resumes incomplete downloads, shows a progress bar
```
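Resuming an incomplete download is commonly done with an HTTP `Range` header computed from the partial file's size. A sketch of just that piece, using only the standard library (the header logic is an assumption about how `download_file` behaves, not its actual code):

```python
import os
import tempfile

def resume_headers(out_path: str) -> dict:
    """Return the HTTP headers needed to resume a partial download.

    If a partial file exists, request only the remaining bytes via a
    Range header; otherwise request the whole file.
    """
    if os.path.exists(out_path):
        existing = os.path.getsize(out_path)
        if existing > 0:
            return {"Range": f"bytes={existing}-"}
    return {}

# simulate a download that stopped after 5 bytes
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"12345")
    partial_path = f.name

h1 = resume_headers(partial_path)               # {'Range': 'bytes=5-'}
h2 = resume_headers(partial_path + ".missing")  # {} -- no partial file, start fresh
os.unlink(partial_path)
```

The server then replies `206 Partial Content`, and the client appends the body to the existing file instead of rewriting it.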