The Modular Diffusers #9672


Status: Open. Wants to merge 150 commits into base branch main.

Commits (150)
33f85fa
add
yiyixuxu Oct 14, 2024
52a7f1c
add dataflow info for each block in builder _repr_
yiyixuxu Oct 16, 2024
e8d0980
add img2img support - output does not match with non-modular pipeline…
yiyixuxu Oct 16, 2024
ad3f9a2
update img2img, result match
yiyixuxu Oct 17, 2024
ddea157
add from_pipe + run_blocks
yiyixuxu Oct 17, 2024
af9572d
controlnet
yiyixuxu Oct 19, 2024
2b6dcbf
fix controlnet
yiyixuxu Oct 20, 2024
70272b1
combine ControlNetStep into ControlNetDenoiseStep
yiyixuxu Oct 20, 2024
46ec174
refactor guider, remove prepareguidance step to be combined into den…
yiyixuxu Oct 23, 2024
f1b3036
update pag guider - draft
yiyixuxu Oct 23, 2024
540d303
refactor guider
yiyixuxu Oct 26, 2024
6742f16
up
yiyixuxu Oct 27, 2024
005195c
add
yiyixuxu Oct 27, 2024
024a9f5
fix so that run_blocks can work with inputs in the state
yiyixuxu Oct 27, 2024
37e8dc7
remove img2img blocks; consolidate text2img and img2img
yiyixuxu Oct 27, 2024
8b811fe
refactor, from_pretrained, from_pipe, remove_blocks, replace_blocks
yiyixuxu Oct 30, 2024
c70a285
style
yiyixuxu Oct 30, 2024
ffc2992
add autostep (not complete)
yiyixuxu Nov 16, 2024
ace53e2
update/refactor
yiyixuxu Dec 10, 2024
a8df0f1
Modular APG (#10173)
hlky Dec 10, 2024
e50d614
only add model as expected_component when the model need to run for t…
yiyixuxu Dec 11, 2024
bc3d1c9
add model_cpu_offload_seq + _exclude_from_cpu_offload
yiyixuxu Dec 13, 2024
2b3cd2d
update
yiyixuxu Dec 14, 2024
b305c77
add offload support!
yiyixuxu Dec 14, 2024
0b90051
add vae encoder node
yiyixuxu Dec 19, 2024
806e8e6
Merge branch 'main' into modular-diffusers
yiyixuxu Dec 29, 2024
4fa85c7
add model_manager and global offloading method
yiyixuxu Dec 31, 2024
72d9a81
components manager
yiyixuxu Dec 31, 2024
10d4a77
style
yiyixuxu Dec 31, 2024
27dde51
add output arg to run_blocks
yiyixuxu Dec 31, 2024
8c02572
add memory_reserve_margin arg to auto offload
yiyixuxu Dec 31, 2024
a09ca7f
refactors: block __init__ no longer accept args. remove update_state…
yiyixuxu Jan 1, 2025
ed59f90
modular pipeline builder -> ModularPipeline
yiyixuxu Jan 1, 2025
72c5bf0
add a from_block class method to modular pipeline
yiyixuxu Jan 1, 2025
6c93626
remove run_blocks, just use __call__
yiyixuxu Jan 1, 2025
1d63306
make it work with lora
yiyixuxu Jan 3, 2025
2e0f5c8
start to add inpaint
yiyixuxu Jan 3, 2025
c12a05b
update to to not assume pipeline has hf_device_map
yiyixuxu Jan 3, 2025
54f410d
add inpaint
yiyixuxu Jan 6, 2025
6985906
controlnet input & remove the MultiPipelineBlocks class
yiyixuxu Jan 7, 2025
db94ca8
add controlnet inpaint + more refactor
yiyixuxu Jan 7, 2025
e973de6
fix controlnet inpaint preprocess
yiyixuxu Jan 8, 2025
7a34832
[modular] Stable Diffusion XL ControlNet Union (#10509)
hlky Jan 9, 2025
2220af6
refactor
yiyixuxu Jan 11, 2025
fb78f4f
Merge branch 'modular-diffusers' of github.com:huggingface/diffusers …
yiyixuxu Jan 11, 2025
0966663
adjust print
yiyixuxu Jan 11, 2025
7f897a9
fix
yiyixuxu Jan 12, 2025
a6804de
add controlnet union to auto & fix for pag
yiyixuxu Jan 12, 2025
7007f72
InputParam, OutputParam, get_auto_doc
yiyixuxu Jan 16, 2025
a226920
get_block_state make it less verbose
yiyixuxu Jan 17, 2025
77b5fa5
make it work with lora has both text_encoder & unet
yiyixuxu Jan 18, 2025
6e2fe26
fix more for lora
yiyixuxu Jan 18, 2025
68a5185
refactor more, ipadapter node, lora node
yiyixuxu Jan 20, 2025
d046cf7
block state + fix for num_images_per_prompt > 1 for denoise/controlne…
yiyixuxu Jan 22, 2025
71df158
Update src/diffusers/pipelines/stable_diffusion_xl/pipeline_stable_di…
yiyixuxu Jan 22, 2025
b3fb418
Merge branch 'modular-diffusers' of github.com:huggingface/diffusers …
yiyixuxu Jan 22, 2025
00cae4e
docstring doc doc doc
yiyixuxu Jan 23, 2025
ccb35ac
Merge branch 'main' into modular-diffusers
yiyixuxu Jan 23, 2025
00a3bc9
fix
yiyixuxu Jan 23, 2025
4bed3e3
up up
yiyixuxu Jan 26, 2025
c7020df
add model_info
yiyixuxu Jan 27, 2025
2c3e4ea
fix
yiyixuxu Jan 29, 2025
e5089d7
update
yiyixuxu Jan 31, 2025
8ddb20b
up
yiyixuxu Feb 1, 2025
cff0fd6
more refactor
yiyixuxu Feb 1, 2025
485f8d1
more refactor
yiyixuxu Feb 1, 2025
addaad0
more more more refactor
yiyixuxu Feb 3, 2025
12650e1
up
yiyixuxu Feb 4, 2025
96795af
Merge branch 'main' into modular-diffusers
yiyixuxu Apr 8, 2025
6a509ba
Merge branch 'main' into modular-diffusers
yiyixuxu May 1, 2025
a8e853b
[modular diffusers] more refactor (#11235)
yiyixuxu Jun 20, 2025
7ad01a6
rename modular_pipeline_block_mappings.py to modular_block_mapping
yiyixuxu Jun 20, 2025
5a8c1b5
add block mappings to modular_diffusers.stable_diffusion_xl.__init__
yiyixuxu Jun 20, 2025
8913d59
add to method to modular loader, copied from DiffusionPipeline, not t…
yiyixuxu Jun 20, 2025
45392cc
update the description of StableDiffusionXLDenoiseLoopWrapper
yiyixuxu Jun 20, 2025
9e58856
add __repr__ method for InsertableOrderedDict
yiyixuxu Jun 21, 2025
04c16d0
update
yiyixuxu Jun 21, 2025
083479c
ordereddict -> insertableOrderedDict; make sure loader to method works
yiyixuxu Jun 21, 2025
4751d45
shorten loop subblock name
yiyixuxu Jun 22, 2025
d12531d
lora: only remove hooks that we add back
yiyixuxu Jun 22, 2025
19545fd
update components manager __repr__
yiyixuxu Jun 22, 2025
78d2454
fix
yiyixuxu Jun 23, 2025
085ade0
add doc (developer guide)
yiyixuxu Jun 23, 2025
42c06e9
update doc
yiyixuxu Jun 23, 2025
1ae591e
update code format
yiyixuxu Jun 23, 2025
bb40443
up
yiyixuxu Jun 23, 2025
7c78fb1
add a overview doc page
yiyixuxu Jun 24, 2025
48e4ff5
update overview
yiyixuxu Jun 24, 2025
e49413d
update doc
yiyixuxu Jun 25, 2025
ffbaa89
move save_pretrained to the correct place
yiyixuxu Jun 25, 2025
cdaaa40
update ComponentSpec.from_component, only update config if it is crea…
yiyixuxu Jun 25, 2025
1c9f0a8
update toctree
yiyixuxu Jun 25, 2025
174628e
Merge branch 'main' into modular-diffusers
yiyixuxu Jun 25, 2025
c0327e4
update init
yiyixuxu Jun 25, 2025
5917d70
remove lora related changes
yiyixuxu Jun 25, 2025
8c038f0
Update src/diffusers/loaders/lora_base.py
yiyixuxu Jun 25, 2025
cb328d3
Apply suggestions from code review
yiyixuxu Jun 25, 2025
7d2a633
style
yiyixuxu Jun 25, 2025
74b908b
style
yiyixuxu Jun 25, 2025
9530245
correct code format
yiyixuxu Jun 25, 2025
c437ae7
copies
yiyixuxu Jun 25, 2025
f3453f0
copy
yiyixuxu Jun 25, 2025
a82e211
style
yiyixuxu Jun 25, 2025
a33206d
fix
yiyixuxu Jun 25, 2025
75e6238
revert changes in pipelines.stable_diffusion_xl folder, can separate …
yiyixuxu Jun 25, 2025
129d658
oops, fix
yiyixuxu Jun 25, 2025
da4242d
use diffusers ModelHook, raise a import error for accelerate inside e…
yiyixuxu Jun 26, 2025
ab6d634
style
yiyixuxu Jun 26, 2025
7492e33
fix
yiyixuxu Jun 26, 2025
b92cda2
move quicktour to first page
yiyixuxu Jun 26, 2025
61772f0
update a comment
yiyixuxu Jun 26, 2025
9abac85
remove mapping file, move to presets.py
yiyixuxu Jun 26, 2025
84f4b27
modular_pipeline_presets.py -> modular_blocks_presets.py
yiyixuxu Jun 26, 2025
449f299
move all the sequential pipelines & auto pipelines to the blocks_pres…
yiyixuxu Jun 26, 2025
7608d2e
style
yiyixuxu Jun 26, 2025
f63d62e
intermediates_inputs -> intermediate_inputs; component_manager -> com…
yiyixuxu Jun 27, 2025
655512e
components manager: change get -> search_models; add get_ids, get_com…
yiyixuxu Jun 28, 2025
885a596
blocks -> sub_blocks; will not by default load all; add load_default…
yiyixuxu Jun 28, 2025
b543bcc
docstring blocks -> sub_blocks
yiyixuxu Jun 28, 2025
75540f4
more blocks -> sub_blocks
yiyixuxu Jun 28, 2025
93760b1
InsertableOrderedDict -> InsertableDict
yiyixuxu Jun 28, 2025
9aaec5b
up
yiyixuxu Jun 28, 2025
58dbe0c
finish the quickstart!
yiyixuxu Jun 28, 2025
49ea4d1
style
yiyixuxu Jun 28, 2025
92b6b43
add some visuals
yiyixuxu Jun 28, 2025
8c680bc
up
yiyixuxu Jun 28, 2025
fedaa00
Merge branch 'main' into modular-diffusers
yiyixuxu Jun 29, 2025
fdd2bed
2024 -> 2025; fix a circular import
yiyixuxu Jun 29, 2025
3a3441c
start the write your own pipeline block tutorial
yiyixuxu Jun 30, 2025
9fae382
Apply suggestions from code review
yiyixuxu Jun 30, 2025
b43e703
Update docs/source/en/modular_diffusers/write_own_pipeline_block.md
yiyixuxu Jun 30, 2025
c75b88f
up
yiyixuxu Jun 30, 2025
285f877
make InsertableDict importable from modular_pipelines
yiyixuxu Jun 30, 2025
f09b1cc
start the section on sequential pipelines
yiyixuxu Jun 30, 2025
c5849ba
more
yiyixuxu Jun 30, 2025
363737e
add loop sequential blocks
yiyixuxu Jun 30, 2025
bbd9340
up
yiyixuxu Jun 30, 2025
0138e17
remove the get_execution_blocks rec from AutoPipelineBlocks repr
yiyixuxu Jun 30, 2025
db4b54c
finish the autopipelines section!
yiyixuxu Jun 30, 2025
abf28d5
update
yiyixuxu Jun 30, 2025
4b12a60
Merge branch 'main' into modular-diffusers
yiyixuxu Jun 30, 2025
f27fbce
more attempts to fix circular import
yiyixuxu Jun 30, 2025
98ea5c9
Merge branch 'modular-diffusers' of github.com:huggingface/diffusers …
yiyixuxu Jun 30, 2025
b5db8aa
developer_guide -> end-to-end guide
yiyixuxu Jul 1, 2025
4543d21
rename quick start- it is really not quick
yiyixuxu Jul 1, 2025
1987c07
update docstree
yiyixuxu Jul 1, 2025
2e20241
up up
yiyixuxu Jul 1, 2025
13fe248
add modularpipelineblocks to be pushtohub mixin
yiyixuxu Jul 1, 2025
8cb5b08
up upup
yiyixuxu Jul 1, 2025
3e46c86
fix links in the doc
yiyixuxu Jul 1, 2025
8 changes: 8 additions & 0 deletions docs/source/en/_toctree.yml
@@ -91,6 +91,14 @@
- local: hybrid_inference/api_reference
title: API Reference
title: Hybrid Inference
- sections:
- local: modular_diffusers/getting_started
title: Getting Started
- local: modular_diffusers/write_own_pipeline_block
title: Write your own pipeline block
- local: modular_diffusers/end_to_end_guide
title: End-to-End Developer Guide
title: Modular Diffusers
- sections:
- local: using-diffusers/consisid
title: ConsisID
642 changes: 642 additions & 0 deletions docs/source/en/modular_diffusers/end_to_end_guide.md

Large diffs are not rendered by default.

1,393 changes: 1,393 additions & 0 deletions docs/source/en/modular_diffusers/getting_started.md

Large diffs are not rendered by default.

811 changes: 811 additions & 0 deletions docs/source/en/modular_diffusers/write_own_pipeline_block.md

Large diffs are not rendered by default.

55 changes: 55 additions & 0 deletions src/diffusers/__init__.py
Expand Up @@ -34,9 +34,11 @@

_import_structure = {
"configuration_utils": ["ConfigMixin"],
"guiders": [],
"hooks": [],
"loaders": ["FromOriginalModelMixin"],
"models": [],
"modular_pipelines": [],
"pipelines": [],
"quantizers.quantization_config": [],
"schedulers": [],
@@ -130,12 +132,26 @@
_import_structure["utils.dummy_pt_objects"] = [name for name in dir(dummy_pt_objects) if not name.startswith("_")]

else:
_import_structure["guiders"].extend(
[
"AdaptiveProjectedGuidance",
"AutoGuidance",
"ClassifierFreeGuidance",
"ClassifierFreeZeroStarGuidance",
"SkipLayerGuidance",
"SmoothedEnergyGuidance",
"TangentialClassifierFreeGuidance",
]
)
_import_structure["hooks"].extend(
[
"FasterCacheConfig",
"HookRegistry",
"LayerSkipConfig",
"PyramidAttentionBroadcastConfig",
"SmoothedEnergyGuidanceConfig",
"apply_faster_cache",
"apply_layer_skip",
"apply_pyramid_attention_broadcast",
]
)
@@ -219,6 +235,15 @@
"WanVACETransformer3DModel",
]
)
_import_structure["modular_pipelines"].extend(
[
"ComponentsManager",
"ComponentSpec",
"ModularLoader",
"ModularPipeline",
"ModularPipelineBlocks",
]
)
_import_structure["optimization"] = [
"get_constant_schedule",
"get_constant_schedule_with_warmup",
@@ -331,6 +356,12 @@
]

else:
_import_structure["modular_pipelines"].extend(
[
"StableDiffusionXLAutoBlocks",
"StableDiffusionXLModularLoader",
]
)
_import_structure["pipelines"].extend(
[
"AllegroPipeline",
@@ -542,6 +573,7 @@
]
)


try:
if not (is_torch_available() and is_transformers_available() and is_opencv_available()):
raise OptionalDependencyNotAvailable()
@@ -748,11 +780,23 @@
except OptionalDependencyNotAvailable:
from .utils.dummy_pt_objects import * # noqa F403
else:
from .guiders import (
AdaptiveProjectedGuidance,
AutoGuidance,
ClassifierFreeGuidance,
ClassifierFreeZeroStarGuidance,
SkipLayerGuidance,
SmoothedEnergyGuidance,
TangentialClassifierFreeGuidance,
)
from .hooks import (
FasterCacheConfig,
HookRegistry,
LayerSkipConfig,
PyramidAttentionBroadcastConfig,
SmoothedEnergyGuidanceConfig,
apply_faster_cache,
apply_layer_skip,
apply_pyramid_attention_broadcast,
)
from .models import (
@@ -832,6 +876,13 @@
WanTransformer3DModel,
WanVACETransformer3DModel,
)
from .modular_pipelines import (
ComponentsManager,
ComponentSpec,
ModularLoader,
ModularPipeline,
ModularPipelineBlocks,
)
from .optimization import (
get_constant_schedule,
get_constant_schedule_with_warmup,
@@ -928,6 +979,10 @@
except OptionalDependencyNotAvailable:
from .utils.dummy_torch_and_transformers_objects import * # noqa F403
else:
from .modular_pipelines import (
StableDiffusionXLAutoBlocks,
StableDiffusionXLModularLoader,
)
from .pipelines import (
AllegroPipeline,
AltDiffusionImg2ImgPipeline,
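The `_import_structure` entries added above feed diffusers' lazy-import machinery: a name is only resolved to a real module on first attribute access, so `import diffusers` stays cheap even as the library grows. A minimal self-contained sketch of that pattern (illustrative only; the stdlib mapping below is an assumption for the demo, and the real implementation in diffusers is its `_LazyModule` helper):

```python
import importlib

# name -> (module, attribute); stands in for diffusers' _import_structure.
# The stdlib entries here are purely for demonstration.
_LAZY_ATTRS = {
    "sqrt": ("math", "sqrt"),
    "OrderedDict": ("collections", "OrderedDict"),
}


class LazyNamespace:
    """Resolve attributes on first access, then cache them."""

    def __getattr__(self, name):
        # __getattr__ is only called when normal lookup fails, so attributes
        # cached via setattr below never pay the import cost twice.
        try:
            module_name, attr = _LAZY_ATTRS[name]
        except KeyError:
            raise AttributeError(name) from None
        value = getattr(importlib.import_module(module_name), attr)
        setattr(self, name, value)
        return value


ns = LazyNamespace()
print(ns.sqrt(16.0))  # -> 4.0
```

The same idea is what lets the diff register `modular_pipelines`, `guiders`, and `hooks` names up front while deferring the heavy torch-dependent imports until they are actually used.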
134 changes: 134 additions & 0 deletions src/diffusers/commands/custom_blocks.py
@@ -0,0 +1,134 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Review comment from the PR author] @DN6 @sayakpaul: I merged this part of the code in without reviewing it (I needed the save_pretrained() code, and it works fine). Can you let me know if you want to keep it here when we merge this PR, or remove it and convert it into a new PR?

"""
Usage example:
TODO
"""

import ast
import importlib.util
import os
from argparse import ArgumentParser, Namespace
from pathlib import Path

from ..utils import logging
from . import BaseDiffusersCLICommand


EXPECTED_PARENT_CLASSES = ["PipelineBlock"]
[Suggested change from the PR author]
- EXPECTED_PARENT_CLASSES = ["PipelineBlock"]
+ EXPECTED_PARENT_CLASSES = ["ModularPipelineBlocks"]

CONFIG = "config.json"


def conversion_command_factory(args: Namespace):
return CustomBlocksCommand(args.block_module_name, args.block_class_name)


class CustomBlocksCommand(BaseDiffusersCLICommand):
@staticmethod
def register_subcommand(parser: ArgumentParser):
conversion_parser = parser.add_parser("custom_blocks")
conversion_parser.add_argument(
"--block_module_name",
type=str,
default="block.py",
help="Module filename in which the custom block will be implemented.",
)
conversion_parser.add_argument(
"--block_class_name",
type=str,
default=None,
help="Name of the custom block. If provided None, we will try to infer it.",
)
conversion_parser.set_defaults(func=conversion_command_factory)

def __init__(self, block_module_name: str = "block.py", block_class_name: str = None):
self.logger = logging.get_logger("diffusers-cli/custom_blocks")
self.block_module_name = Path(block_module_name)
self.block_class_name = block_class_name

def run(self):
# determine the block to be saved.
out = self._get_class_names(self.block_module_name)
classes_found = list({cls for cls, _ in out})

if self.block_class_name is not None:
child_class, parent_class = self._choose_block(out, self.block_class_name)
if child_class is None and parent_class is None:
raise ValueError(
"`block_class_name` could not be retrieved. Available classes from "
f"{self.block_module_name}:\n{classes_found}"
)
else:
self.logger.info(
f"Found classes: {classes_found} will be using {classes_found[0]}. "
"If this needs to be changed, re-run the command specifying `block_class_name`."
)
child_class, parent_class = out[0][0], out[0][1]

# dynamically get the custom block and initialize it to call `save_pretrained` in the current directory.
# the user is responsible for running it, so I guess that is safe?
module_name = f"__dynamic__{self.block_module_name.stem}"
spec = importlib.util.spec_from_file_location(module_name, str(self.block_module_name))
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
getattr(module, child_class)().save_pretrained(os.getcwd())

# or, we could create it manually.
# automap = self._create_automap(parent_class=parent_class, child_class=child_class)
# with open(CONFIG, "w") as f:
# json.dump(automap, f)
with open("requirements.txt", "w") as f:
f.write("")

def _choose_block(self, candidates, chosen=None):
for cls, base in candidates:
if cls == chosen:
return cls, base
return None, None

def _get_class_names(self, file_path):
source = file_path.read_text(encoding="utf-8")
try:
tree = ast.parse(source, filename=file_path)
except SyntaxError as e:
raise ValueError(f"Could not parse {file_path!r}: {e}") from e

results: list[tuple[str, str]] = []
for node in tree.body:
if not isinstance(node, ast.ClassDef):
continue

# extract all base names for this class
base_names = [bname for b in node.bases if (bname := self._get_base_name(b)) is not None]

# for each allowed base that appears in the class's bases, emit a tuple
for allowed in EXPECTED_PARENT_CLASSES:
if allowed in base_names:
results.append((node.name, allowed))

return results

def _get_base_name(self, node: ast.expr):
if isinstance(node, ast.Name):
return node.id
elif isinstance(node, ast.Attribute):
val = self._get_base_name(node.value)
return f"{val}.{node.attr}" if val else node.attr
return None

def _create_automap(self, parent_class, child_class):
module = str(self.block_module_name).replace(".py", "").rsplit(".", 1)[-1]
auto_map = {f"{parent_class}": f"{module}.{child_class}"}
return {"auto_map": auto_map}
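The `_get_class_names`/`_get_base_name` pair above discovers candidate block classes by walking the module's AST rather than importing it, so discovery works even before the block's dependencies are installed and without executing user code. A condensed, self-contained sketch of the same technique (parsing an inline source string; the sample classes are made up for the demo):

```python
import ast

EXPECTED_PARENT_CLASSES = ["PipelineBlock"]


def base_name(node):
    # Recover "PipelineBlock" from `class X(PipelineBlock)` and
    # "blocks.PipelineBlock" from `class X(blocks.PipelineBlock)`.
    if isinstance(node, ast.Name):
        return node.id
    if isinstance(node, ast.Attribute):
        val = base_name(node.value)
        return f"{val}.{node.attr}" if val else node.attr
    return None


def find_blocks(source):
    """Return (class_name, parent_name) pairs without importing the source."""
    results = []
    for node in ast.parse(source).body:
        if not isinstance(node, ast.ClassDef):
            continue
        bases = [b for b in map(base_name, node.bases) if b]
        for allowed in EXPECTED_PARENT_CLASSES:
            if allowed in bases:
                results.append((node.name, allowed))
    return results


src = """
class MyBlock(PipelineBlock):
    pass

class Unrelated:
    pass
"""
print(find_blocks(src))  # -> [('MyBlock', 'PipelineBlock')]
```

Only once a matching class is found does the command fall back to `importlib` to actually load and instantiate it for `save_pretrained`.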
2 changes: 2 additions & 0 deletions src/diffusers/commands/diffusers_cli.py
@@ -15,6 +15,7 @@

from argparse import ArgumentParser

from .custom_blocks import CustomBlocksCommand
from .env import EnvironmentCommand
from .fp16_safetensors import FP16SafetensorsCommand

@@ -26,6 +27,7 @@ def main():
# Register commands
EnvironmentCommand.register_subcommand(commands_parser)
FP16SafetensorsCommand.register_subcommand(commands_parser)
CustomBlocksCommand.register_subcommand(commands_parser)

# Let's go
args = parser.parse_args()
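The registration above follows the standard argparse subcommand pattern: each command class adds its own subparser and binds a factory via `set_defaults(func=...)`, and `main()` dispatches by calling `args.func(args)`. A minimal self-contained sketch (the command name and `run()` protocol mirror the diff, but this is illustrative, not the actual diffusers CLI):

```python
from argparse import ArgumentParser


class CustomBlocksCommand:
    @staticmethod
    def register_subcommand(parser):
        sub = parser.add_parser("custom_blocks")
        sub.add_argument("--block_module_name", type=str, default="block.py")
        # Bind a factory so main() can instantiate the right command class.
        sub.set_defaults(func=lambda args: CustomBlocksCommand(args.block_module_name))

    def __init__(self, block_module_name):
        self.block_module_name = block_module_name

    def run(self):
        return f"saving block from {self.block_module_name}"


def main(argv):
    parser = ArgumentParser("demo-cli")
    commands_parser = parser.add_subparsers()
    CustomBlocksCommand.register_subcommand(commands_parser)
    args = parser.parse_args(argv)
    return args.func(args).run()


print(main(["custom_blocks", "--block_module_name", "my_block.py"]))
# -> saving block from my_block.py
```

Because each command registers itself against the shared `commands_parser`, adding `CustomBlocksCommand` to the CLI is the one-line change shown in the diff.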
37 changes: 37 additions & 0 deletions src/diffusers/guiders/__init__.py
@@ -0,0 +1,37 @@
# Copyright 2025 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from typing import Union

from ..utils import is_torch_available


if is_torch_available():
from .adaptive_projected_guidance import AdaptiveProjectedGuidance
from .auto_guidance import AutoGuidance
from .classifier_free_guidance import ClassifierFreeGuidance
from .classifier_free_zero_star_guidance import ClassifierFreeZeroStarGuidance
from .skip_layer_guidance import SkipLayerGuidance
from .smoothed_energy_guidance import SmoothedEnergyGuidance
from .tangential_classifier_free_guidance import TangentialClassifierFreeGuidance

GuiderType = Union[
AdaptiveProjectedGuidance,
AutoGuidance,
[Review comment from the PR author] @a-r-r-o-w: can we make sure PAG has its own class before we merge?

ClassifierFreeGuidance,
ClassifierFreeZeroStarGuidance,
SkipLayerGuidance,
SmoothedEnergyGuidance,
TangentialClassifierFreeGuidance,
]
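The guider classes collected in `GuiderType` are strategies for combining conditional and unconditional model predictions during denoising. The simplest member, classifier-free guidance, blends the two as `uncond + scale * (cond - uncond)`. A dependency-free sketch of that core step (plain Python lists stand in for the noise-prediction tensors the real `ClassifierFreeGuidance` operates on inside the denoise loop):

```python
def classifier_free_guidance(noise_uncond, noise_cond, guidance_scale):
    """Blend unconditional and conditional noise predictions elementwise.

    guidance_scale = 1.0 reproduces the conditional prediction; larger
    scales push the result further toward the conditioning signal.
    """
    return [
        u + guidance_scale * (c - u)
        for u, c in zip(noise_uncond, noise_cond)
    ]


uncond = [0.0, 1.0, 2.0]
cond = [1.0, 1.0, 0.0]
print(classifier_free_guidance(uncond, cond, 7.5))
# -> [7.5, 1.0, -13.0]
```

The other guiders in the union (skip-layer, adaptive-projected, tangential, and so on) vary how the two branches are produced or projected, which is why the modular denoise blocks accept any `GuiderType` rather than hard-coding one formula.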