[Refactor] Better align from_single_file logic with from_pretrained #7496


Merged: 113 commits, May 9, 2024

Commits (113)
f03ea10
refactor unet single file loading a bit.
sayakpaul Mar 14, 2024
bfaa0d8
retrieve the unet from create_diffusers_unet_model_from_ldm
sayakpaul Mar 14, 2024
bc32a9d
update
DN6 Mar 18, 2024
5bb7d56
update
DN6 Mar 18, 2024
56863a2
update
DN6 Mar 21, 2024
2cd8175
update
DN6 Mar 21, 2024
92cf552
update
DN6 Mar 21, 2024
57aa8be
update
DN6 Mar 25, 2024
8c9a890
update
DN6 Mar 25, 2024
5cb4f12
update
DN6 Mar 26, 2024
0dd26eb
update
DN6 Mar 26, 2024
8e0cdd2
update
DN6 Mar 26, 2024
5203eb6
update
DN6 Mar 26, 2024
17f3cbd
update
DN6 Mar 26, 2024
3d0bc40
update
DN6 Mar 27, 2024
88389a2
update
DN6 Mar 27, 2024
3850ef8
update
DN6 Mar 27, 2024
7997372
update
DN6 Mar 27, 2024
64bdee0
update
DN6 Mar 27, 2024
f12217a
update
DN6 Mar 29, 2024
8bf783a
update
DN6 Mar 29, 2024
2838e8a
update
DN6 Mar 29, 2024
c8665f1
update
DN6 Mar 29, 2024
76b838a
update
DN6 Mar 29, 2024
2e0dd15
update
DN6 Mar 29, 2024
b2fe190
update
DN6 Mar 29, 2024
9d692b9
update
DN6 Mar 29, 2024
ebebc43
update
DN6 Mar 29, 2024
3aa537b
update
DN6 Mar 29, 2024
3f3582d
update
DN6 Mar 29, 2024
adaf292
update
DN6 Mar 29, 2024
d8cd73d
update
DN6 Mar 29, 2024
06b2a77
update
DN6 Mar 29, 2024
4b16059
update
DN6 Apr 2, 2024
7421a32
update
DN6 Apr 2, 2024
b5497a9
update
DN6 Apr 2, 2024
17fa96f
update
DN6 Apr 2, 2024
e85570c
update
DN6 Apr 2, 2024
2c3b5e7
update
DN6 Apr 2, 2024
243bbf8
update
DN6 Apr 3, 2024
a48381d
Merge branch 'main' into single-file-updates
DN6 Apr 3, 2024
c60eb53
update
DN6 Apr 3, 2024
2f88a9a
update
DN6 Apr 3, 2024
3dcc07f
update
DN6 Apr 3, 2024
4504fdf
update
DN6 Apr 4, 2024
38c6f8e
update
DN6 Apr 4, 2024
bbdfe9d
update
DN6 Apr 4, 2024
88a7a94
update
DN6 Apr 4, 2024
7c9fffa
tests
DN6 Apr 5, 2024
0ae2137
update
DN6 Apr 8, 2024
de97fbc
update
DN6 Apr 8, 2024
66df5f7
update
DN6 Apr 9, 2024
8d4a1d2
Update docs/source/en/api/single_file.md
DN6 Apr 9, 2024
93da824
Update docs/source/en/api/single_file.md
DN6 Apr 9, 2024
ab09847
update
DN6 Apr 9, 2024
18e1dec
update
DN6 Apr 9, 2024
4c7a060
update
DN6 Apr 9, 2024
aea47f3
update
DN6 Apr 9, 2024
7ccd797
update
DN6 Apr 9, 2024
e1c7607
update
DN6 Apr 9, 2024
5f05f91
update
DN6 Apr 10, 2024
2ea357d
update
DN6 Apr 10, 2024
912b49b
update
DN6 Apr 10, 2024
be1e70b
merge upstream
DN6 Apr 10, 2024
695cedd
update
DN6 Apr 10, 2024
759afb2
update
DN6 Apr 10, 2024
25c7ed7
update
DN6 Apr 10, 2024
5dca42f
update
DN6 Apr 11, 2024
ba74a33
Merge branch 'main' into single-file-updates
sayakpaul Apr 11, 2024
083c494
Update docs/source/en/api/loaders/single_file.md
DN6 Apr 17, 2024
3f39e48
Update src/diffusers/loaders/single_file.py
DN6 Apr 17, 2024
bbe4b78
Update docs/source/en/api/loaders/single_file.md
DN6 Apr 17, 2024
d8d2bdc
Update docs/source/en/api/loaders/single_file.md
DN6 Apr 17, 2024
8e72865
Update docs/source/en/api/loaders/single_file.md
DN6 Apr 17, 2024
492161e
Merge branch 'single-file-updates' of https://github.com/huggingface/…
DN6 Apr 17, 2024
ccb130f
Update docs/source/en/api/loaders/single_file.md
DN6 Apr 17, 2024
39e8697
update
DN6 Apr 19, 2024
4a78284
Merge branch 'single-file-updates' of https://github.com/huggingface/…
DN6 Apr 19, 2024
e47b4a1
Merge branch 'single-file-updates-changes' into single-file-updates
DN6 Apr 19, 2024
109b997
update
DN6 Apr 19, 2024
351a520
update
DN6 Apr 19, 2024
0253e61
update
DN6 Apr 19, 2024
e3d4f08
update
DN6 Apr 22, 2024
e778b7a
update
DN6 Apr 22, 2024
53b16fc
update
DN6 Apr 22, 2024
7127f9f
update
DN6 Apr 24, 2024
2bd6c28
update
DN6 Apr 25, 2024
0e4630d
Merge branch 'main' into single-file-updates
DN6 Apr 25, 2024
9cecfb9
update
DN6 Apr 25, 2024
a775ad0
update
DN6 Apr 25, 2024
2dd9a0b
update
DN6 Apr 26, 2024
a2a0030
update
DN6 Apr 26, 2024
7e7cbd6
update
DN6 Apr 26, 2024
03a2ed8
update
DN6 Apr 26, 2024
0051843
update
DN6 Apr 29, 2024
a5c78c2
update
DN6 Apr 29, 2024
96f1b2e
Merge branch 'main' into single-file-updates
DN6 Apr 30, 2024
47f825d
update
DN6 May 1, 2024
8e41325
Merge branch 'main' into single-file-updates
DN6 May 7, 2024
bd2e73f
update
DN6 May 7, 2024
f5e4017
update
DN6 May 7, 2024
954c20a
update
DN6 May 7, 2024
a04562f
update
DN6 May 7, 2024
4a8f072
update
DN6 May 7, 2024
cc16cc8
update
DN6 May 8, 2024
28bf5ad
update
DN6 May 8, 2024
8387950
update
DN6 May 8, 2024
fff5297
update
DN6 May 8, 2024
696b258
update
DN6 May 8, 2024
d364604
update
DN6 May 9, 2024
6a22444
update
DN6 May 9, 2024
c61779d
update
DN6 May 9, 2024
f211c04
Merge branch 'main' into single-file-updates
DN6 May 9, 2024
2 changes: 1 addition & 1 deletion .github/workflows/push_tests.yml
@@ -124,7 +124,7 @@ jobs:
shell: bash
strategy:
matrix:
module: [models, schedulers, lora, others]
module: [models, schedulers, lora, others, single_file]
Member: Shouldn't we also include this in the nightly tests?

steps:
- name: Checkout diffusers
uses: actions/checkout@v3
233 changes: 222 additions & 11 deletions docs/source/en/api/loaders/single_file.md
@@ -10,28 +10,239 @@ an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express o
specific language governing permissions and limitations under the License.
-->

# Single files
# Loading Pipelines and Models via `from_single_file`

Diffusers supports loading pretrained pipeline (or model) weights stored in a single file, such as a `ckpt` or `safetensors` file. These single file types are typically produced from community trained models. There are three classes for loading single file weights:
The `from_single_file` method allows you to load supported pipelines using a single checkpoint file as opposed to the folder format used by Diffusers. This is useful when working with Stable Diffusion Web UIs (such as A1111), which rely on a single file to distribute all the components of a diffusion model.

- [`FromSingleFileMixin`] supports loading pretrained pipeline weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- [`FromOriginalVAEMixin`] supports loading a pretrained [`AutoencoderKL`] from pretrained ControlNet weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
- [`FromOriginalControlnetMixin`] supports loading pretrained ControlNet weights stored in a single file, which can either be a `ckpt` or `safetensors` file.
The `from_single_file` method also supports loading models in their originally distributed format. This means that supported models that have been finetuned with other services can be loaded directly into supported Diffusers model objects and pipelines.

## Pipelines that currently support `from_single_file` loading
Member: Can we provide a utility in diffusers to know this programmatically? It's a nice to have rather than a must have, though.

diffusers.pipelines.single_file_compatible_class

Something like this ^.

Collaborator (Author): Can handle this in a follow up PR.
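A minimal sketch of what such a utility could look like, assuming it simply checks whether a pipeline class inherits the single-file loading mixin (the helper name `supports_single_file_loading` is hypothetical, not part of the diffusers API):

```python
from diffusers.loaders.single_file import FromSingleFileMixin


def supports_single_file_loading(pipeline_cls) -> bool:
    # Hypothetical helper: a pipeline supports `from_single_file`
    # if it inherits the mixin that provides the method.
    return issubclass(pipeline_cls, FromSingleFileMixin)


# Usage:
# from diffusers import StableDiffusionXLPipeline
# supports_single_file_loading(StableDiffusionXLPipeline)  # True
```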


- [`StableDiffusionPipeline`]
- [`StableDiffusionImg2ImgPipeline`]
- [`StableDiffusionInpaintPipeline`]
- [`StableDiffusionControlNetPipeline`]
- [`StableDiffusionControlNetImg2ImgPipeline`]
- [`StableDiffusionControlNetInpaintPipeline`]
- [`StableDiffusionUpscalePipeline`]
- [`StableDiffusionXLPipeline`]
- [`StableDiffusionXLImg2ImgPipeline`]
- [`StableDiffusionXLInpaintPipeline`]
- [`StableDiffusionXLInstructPix2PixPipeline`]
- [`StableDiffusionXLControlNetPipeline`]
- [`StableDiffusionXLKDiffusionPipeline`]
- [`LatentConsistencyModelPipeline`]
- [`LatentConsistencyModelImg2ImgPipeline`]
- [`StableDiffusionControlNetXSPipeline`]
- [`StableDiffusionXLControlNetXSPipeline`]
- [`LEditsPPPipelineStableDiffusion`]
- [`LEditsPPPipelineStableDiffusionXL`]
- [`PIAPipeline`]

## Models that currently support `from_single_file` loading

- [`UNet2DConditionModel`]
- [`StableCascadeUNet`]
- [`AutoencoderKL`]
- [`ControlNetModel`]

## Usage Examples

## Loading a Pipeline using `from_single_file`

```python
from diffusers import StableDiffusionXLPipeline

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"
pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path)
```

## Setting components in a Pipeline using `from_single_file`

Swap components of the pipeline by passing them directly to the `from_single_file` method, e.g. if you would like to use a different scheduler than the pipeline's default.

```python
from diffusers import StableDiffusionXLPipeline, DDIMScheduler

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"

scheduler = DDIMScheduler()
pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, scheduler=scheduler)

```

Similarly, you can pass a loaded model component, such as a ControlNet, to the pipeline:

```python
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel

ckpt_path = "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.safetensors"

# Load the ControlNet component from its own model repository and pass it to the pipeline.
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_canny")
pipe = StableDiffusionControlNetPipeline.from_single_file(ckpt_path, controlnet=controlnet)

```

## Loading a Model using `from_single_file`

```python
from diffusers import StableCascadeUNet

ckpt_path = "https://huggingface.co/stabilityai/stable-cascade/blob/main/stage_b_lite.safetensors"
model = StableCascadeUNet.from_single_file(ckpt_path)

```

## Using a Diffusers model repository to configure single file loading

Under the hood, `from_single_file` will try to determine a model repository to use to configure the components of the pipeline. You can also pass in a repository id to the `config` argument of the `from_single_file` method to explicitly set the repository to use.

```python
from diffusers import StableDiffusionXLPipeline

ckpt_path = "https://huggingface.co/segmind/SSD-1B/blob/main/SSD-1B.safetensors"
repo_id = "segmind/SSD-1B"

pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, config=repo_id)

```

## Override configuration options when using single file loading
Collaborator: I think we should swap this section with the next one, because in the example we use the `config` argument, which hasn't been explained yet (but it will be in the next section).

Override the default model or pipeline configuration options by passing the relevant arguments directly to the `from_single_file` method. Any argument supported by the model or pipeline class can be configured in this way:

```python
from diffusers import StableDiffusionXLInstructPix2PixPipeline

ckpt_path = "https://huggingface.co/stabilityai/cosxl/blob/main/cosxl_edit.safetensors"
pipe = StableDiffusionXLInstructPix2PixPipeline.from_single_file(ckpt_path, config="diffusers/sdxl-instructpix2pix-768", is_cosxl_edit=True)

```

```python
from diffusers import UNet2DConditionModel

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"
# Review comment (Member): Let's use a different SDXL-finetuned single-file format
# checkpoint here so that users understand that the configs aren't just limited to
# the original checkpoints.
model = UNet2DConditionModel.from_single_file(ckpt_path, upcast_attention=True)

```

In the SSD-1B example above, since we explicitly passed `config="segmind/SSD-1B"`, `from_single_file` uses this [configuration file](https://huggingface.co/segmind/SSD-1B/blob/main/unet/config.json) from the `unet` subfolder of `segmind/SSD-1B` to configure the UNet component of the checkpoint. Similarly, it uses the `config.json` file from the `vae` subfolder to configure the VAE model, the `config.json` file from the `text_encoder` folder to configure the text encoder, and so on.

Note that most of the time you do not need to explicitly pass a `config` argument; `from_single_file` will automatically map the checkpoint to an appropriate model repository (discussed in more detail in the next section). However, passing `config` can be useful in cases where model components have been changed from what was originally distributed, or where a checkpoint file does not have the necessary metadata to correctly determine the configuration to use for the pipeline.

<Tip>

To learn more about how to load single file weights, see the [Load different Stable Diffusion formats](../../using-diffusers/other-formats) loading guide.

</Tip>

## FromSingleFileMixin
## Working with local files

[[autodoc]] loaders.single_file.FromSingleFileMixin
As of `diffusers>=0.28.0` the `from_single_file` method will attempt to configure a pipeline or model by first inferring the model type from the checkpoint file and then using the model type to determine the appropriate model repo configuration to use from the Hugging Face Hub. For example, any single file checkpoint based on the Stable Diffusion XL base model will use the [`stabilityai/stable-diffusion-xl-base-1.0`](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model repo to configure the pipeline.

## FromOriginalVAEMixin
If you are working in an environment with restricted internet access, it is recommended to download the config files and checkpoints for the model to your preferred directory and pass the local paths to the `pretrained_model_link_or_path` and `config` arguments of the `from_single_file` method.

[[autodoc]] loaders.autoencoder.FromOriginalVAEMixin
```python
from huggingface_hub import hf_hub_download, snapshot_download

from diffusers import StableDiffusionXLPipeline

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors"
)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"]
)

pipe = StableDiffusionXLPipeline.from_single_file(my_local_checkpoint_path, config=my_local_config_path, local_files_only=True)

```

By default this will download the checkpoints and config files to the [Hugging Face Hub cache directory](https://huggingface.co/docs/huggingface_hub/en/guides/manage-cache). You can also specify a local directory to download the files to by passing the `local_dir` argument to the `hf_hub_download` and `snapshot_download` functions.

```python
from huggingface_hub import hf_hub_download, snapshot_download

from diffusers import StableDiffusionXLPipeline

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors",
    local_dir="my_local_checkpoints"
)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
    local_dir="my_local_config"
)

pipe = StableDiffusionXLPipeline.from_single_file(my_local_checkpoint_path, config=my_local_config_path, local_files_only=True)

```

## Working with local files on file systems that do not support symlinking

By default the `from_single_file` method relies on the `huggingface_hub` caching mechanism to fetch and store checkpoints and config files for models and pipelines. If you are working with a file system that does not support symlinking, it is recommended that you first download the checkpoint file to a local directory and disable symlinking by passing the `local_dir_use_symlinks=False` argument to the `hf_hub_download` and `snapshot_download` functions.

```python
from huggingface_hub import hf_hub_download, snapshot_download

my_local_checkpoint_path = hf_hub_download(
    repo_id="segmind/SSD-1B",
    filename="SSD-1B.safetensors",
    local_dir="my_local_checkpoints",
    local_dir_use_symlinks=False
)
print("My local checkpoint: ", my_local_checkpoint_path)

my_local_config_path = snapshot_download(
    repo_id="segmind/SSD-1B",
    allow_patterns=["*.json", "**/*.json", "*.txt", "**/*.txt"],
    local_dir="my_local_config",
    local_dir_use_symlinks=False,
)
print("My local config: ", my_local_config_path)

```

Then pass the local paths to the `pretrained_model_link_or_path` and `config` arguments of the `from_single_file` method.

```python
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(my_local_checkpoint_path, config=my_local_config_path, local_files_only=True)

```

<Tip>
Disabling symlinking means that the `huggingface_hub` caching mechanism has no way to determine whether a file has already been downloaded to the local directory. This means that the `hf_hub_download` and `snapshot_download` functions will download files to the local directory each time they are executed. If you are disabling symlinking, it is recommended that you separate the model download and loading steps to avoid downloading the same file multiple times.

</Tip>
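For instance, a minimal sketch of separating the two steps so the checkpoint is only fetched when it is not already on disk (the `segmind/SSD-1B` paths follow the earlier examples, and the existence check is just one possible guard rather than a diffusers utility):

```python
import os

from huggingface_hub import hf_hub_download

from diffusers import StableDiffusionXLPipeline

local_dir = "my_local_checkpoints"
checkpoint_path = os.path.join(local_dir, "SSD-1B.safetensors")

# Only hit the Hub when the checkpoint is not already present locally.
if not os.path.exists(checkpoint_path):
    checkpoint_path = hf_hub_download(
        repo_id="segmind/SSD-1B",
        filename="SSD-1B.safetensors",
        local_dir=local_dir,
        local_dir_use_symlinks=False,
    )

pipe = StableDiffusionXLPipeline.from_single_file(checkpoint_path)
```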

## Using the original configuration file of a model

If you would like to configure the parameters of the model components in the pipeline using the original YAML configuration file, you can pass a local path or URL to the original configuration file to the `original_config` argument of the `from_single_file` method.

```python
from diffusers import StableDiffusionXLPipeline

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"
repo_id = "stabilityai/stable-diffusion-xl-base-1.0"
original_config = "https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml"

pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, original_config=original_config)
```

In the example above, the `original_config` file is only used to configure the parameters of the individual model components of the pipeline. For example, it is used to configure parameters such as the `in_channels` of the `vae` and `unet` models. It is not used to determine the type of the component objects in the pipeline.


<Tip>
When using `original_config` with `local_files_only=True`, Diffusers will attempt to infer the components based on the type signatures of the pipeline class, rather than attempting to fetch the pipeline config from the Hugging Face Hub. This is to prevent backward-breaking changes in existing code that might not be able to connect to the internet to fetch the necessary pipeline config files.

This is not as reliable as providing a path to a local config repo and might lead to errors when configuring the pipeline. To avoid this, please run the pipeline with `local_files_only=False` once to download the appropriate pipeline config files to the local cache.
</Tip>
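As a rough sketch of that workflow, reusing the checkpoint and `original_config` paths from the example above (the two-step pattern is a suggestion, not a required API):

```python
from diffusers import StableDiffusionXLPipeline

ckpt_path = "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/blob/main/sd_xl_base_1.0_0.9vae.safetensors"
original_config = "https://raw.githubusercontent.com/Stability-AI/generative-models/main/configs/inference/sd_xl_base.yaml"

# First run with network access so the pipeline config files land in the local cache.
pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, original_config=original_config, local_files_only=False)

# Subsequent runs can then resolve everything from the cache without a connection.
pipe = StableDiffusionXLPipeline.from_single_file(ckpt_path, original_config=original_config, local_files_only=True)
```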


## FromSingleFileMixin

[[autodoc]] loaders.single_file.FromSingleFileMixin

## FromOriginalControlnetMixin
## FromOriginalModelMixin

[[autodoc]] loaders.controlnet.FromOriginalControlNetMixin
[[autodoc]] loaders.single_file_model.FromOriginalModelMixin
1 change: 1 addition & 0 deletions src/diffusers/__init__.py
@@ -27,6 +27,7 @@

_import_structure = {
"configuration_utils": ["ConfigMixin"],
"loaders": ["FromOriginalModelMixin"],
"models": [],
"pipelines": [],
"schedulers": [],
12 changes: 8 additions & 4 deletions src/diffusers/configuration_utils.py
@@ -340,6 +340,8 @@ def load_config(

"""
cache_dir = kwargs.pop("cache_dir", None)
local_dir = kwargs.pop("local_dir", None)
local_dir_use_symlinks = kwargs.pop("local_dir_use_symlinks", "auto")
Member: What does the "auto" strategy correspond to?

force_download = kwargs.pop("force_download", False)
resume_download = kwargs.pop("resume_download", None)
proxies = kwargs.pop("proxies", None)
@@ -364,13 +366,13 @@
if os.path.isfile(pretrained_model_name_or_path):
config_file = pretrained_model_name_or_path
elif os.path.isdir(pretrained_model_name_or_path):
if os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)):
# Load from a PyTorch checkpoint
config_file = os.path.join(pretrained_model_name_or_path, cls.config_name)
elif subfolder is not None and os.path.isfile(
if subfolder is not None and os.path.isfile(
os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name)
):
config_file = os.path.join(pretrained_model_name_or_path, subfolder, cls.config_name)
elif os.path.isfile(os.path.join(pretrained_model_name_or_path, cls.config_name)):
# Load from a PyTorch checkpoint
Member (suggested change): replace "# Load from a PyTorch checkpoint" with "# Load from a PyTorch checkpoint (SD checkpoints usually have some configuration details in them)".
config_file = os.path.join(pretrained_model_name_or_path, cls.config_name)
else:
raise EnvironmentError(
f"Error no file named {cls.config_name} found in directory {pretrained_model_name_or_path}."
@@ -390,6 +392,8 @@
user_agent=user_agent,
subfolder=subfolder,
revision=revision,
local_dir=local_dir,
local_dir_use_symlinks=local_dir_use_symlinks,
)
except RepositoryNotFoundError:
raise EnvironmentError(
7 changes: 2 additions & 5 deletions src/diffusers/loaders/__init__.py
@@ -54,9 +54,7 @@ def text_encoder_attn_modules(text_encoder):
_import_structure = {}

if is_torch_available():
_import_structure["autoencoder"] = ["FromOriginalVAEMixin"]

_import_structure["controlnet"] = ["FromOriginalControlNetMixin"]
_import_structure["single_file_model"] = ["FromOriginalModelMixin"]
_import_structure["unet"] = ["UNet2DConditionLoadersMixin"]
_import_structure["utils"] = ["AttnProcsLayers"]
if is_transformers_available():
@@ -70,8 +68,7 @@ def text_encoder_attn_modules(text_encoder):

if TYPE_CHECKING or DIFFUSERS_SLOW_IMPORT:
if is_torch_available():
from .autoencoder import FromOriginalVAEMixin
from .controlnet import FromOriginalControlNetMixin
from .single_file_model import FromOriginalModelMixin
from .unet import UNet2DConditionLoadersMixin
from .utils import AttnProcsLayers
