Releases · comfyanonymous/ComfyUI
v0.3.27
What's Changed
- Support the Hunyuan3Dv2 model.
- Support Wan Control LoRAs.
- Make the SkipLayerGuidanceDIT node work on WAN.
- [3d] remove unused params by @jtydhr88 in #6931
- ltxv: relax frame_idx divisibility for single frames. by @kvochko in #7146
- Only check frontend package if using default frontend by @huchenlei in #7179
- Fix LoadImageOutput node by @christian-byrne in #7143
- Add ER-SDE sampler by @chaObserv in #7187
- Add unwrap widget value support by @huchenlei in #7197
- Ensure the extra_args in dpmpp sde series by @chaObserv in #7204
- [NodeDef] Add documentation on multi_select input option by @huchenlei in #7212
- Add codeowner for comfy/comfy_types by @huchenlei in #7213
- Add --use-flash-attention flag. by @FeepingCreature in #7223
- Tolerate missing @torch.library.custom_op by @FeepingCreature in #7234
- Update frontend to 1.12.9 by @huchenlei in #7236
- Update frontend to 1.12.14 by @christian-byrne in #7244
- Guard the noise term in er_sde by @chaObserv in #7265
- Call unpatch_hooks at the start of ModelPatcher.partially_unload by @Kosinkadink in #7253
- Update frontend to 1.13 by @huchenlei in #7331
- Add backend primitive nodes by @huchenlei in #7328
- Update frontend to 1.14 by @huchenlei in #7343
- Native LotusD Implementation by @thot-experiment in #7125
- Support outputting normal and lineart at once by @jtydhr88 in #7290
- [nit] Format error strings by @huchenlei in #7345
New Contributors
- @FeepingCreature made their first contribution in #7223
- @thot-experiment made their first contribution in #7125
Full Changelog: v0.3.26...v0.3.27
v0.3.26
What's Changed
- Support "fixed" HunyuanVideo i2v model (actually a model with a different architecture from the original released version).
- Use fp16 as the default compute dtype for the WAN 2.1 models.
- Support fp8_scaled model files that don't enable the fp8 matrix mult by default.
- fixed: Incorrect guide message for missing frontend. by @ltdrdata in #7105
- Typo in node_typing.py by @JettHu in #7092
- Update frontend to 1.11.8 by @huchenlei in #7119
- Weight Hooks Switching Optimization by @Kosinkadink in #7067
- Fix Stable Cascade VAE issues with lowvram.
Full Changelog: v0.3.24...v0.3.26
v0.3.25
What's Changed
- Support "fixed" HunyuanVideo i2v model (actually a model with a different architecture from the original released version).
- Use fp16 as the default compute dtype for the WAN 2.1 models.
- Support fp8_scaled model files that don't enable the fp8 matrix mult by default.
- fixed: Incorrect guide message for missing frontend. by @ltdrdata in #7105
- Typo in node_typing.py by @JettHu in #7092
- Update frontend to 1.11.8 by @huchenlei in #7119
- Fix Stable Cascade VAE issues with lowvram.
Full Changelog: v0.3.24...v0.3.25
v0.3.24
- Fix regression when using incompatible embeddings.
Full Changelog: v0.3.23...v0.3.24
v0.3.23
What's Changed
- Support HunyuanVideo Image to Video model.
- [NodeDef] Explicitly add control_after_generate to seed/noise_seed by @huchenlei in #7059
- Better argument handling of front-end-root by @silveroxides in #7043
- Add type hint for FileLocator by @huchenlei in #6968
Full Changelog: v0.3.22...v0.3.23
v0.3.22
What's Changed
Full Changelog: v0.3.21...v0.3.22
v0.3.21
- Fix issue where the LTXV 0.9.5 img2vid workflow didn't work with some image resolutions.
Full Changelog: v0.3.20...v0.3.21
v0.3.20
What's Changed
- Support LTXV 0.9.5
- Improve: Provide a better guide message when the frontend is missing. by @ltdrdata in #7079
Full Changelog: v0.3.19...v0.3.20
v0.3.19
What's Changed
- Temporal Area Composition: ConditioningSetAreaPercentageVideo
- Enabling fp16 accumulation now prioritizes loading models in fp16 when possible.
- Use enum list for --fast options by @huchenlei in #7024
- Use comfyui_frontend_package pypi package to manage frontend dependency (Frontend v1.10.17) by @huchenlei in #7021
- improved: better frontend package installation guide by @ltdrdata in #7047
Full Changelog: v0.3.18...v0.3.19
v0.3.18
What's Changed
- Improve Wan performance by allowing batching and fix an issue with long prompts.
- Support Cambricon MLU by @BiologicalExplosion in #6964
New Contributors
- @BiologicalExplosion made their first contribution in #6964
Full Changelog: v0.3.17...v0.3.18