Add SEGA for FLUX #3


Draft pull request: 709 commits to merge into base branch `sega-dits`.

Changes from all commits (709 commits):
fc4229a
Add `remote_decode` to `remote_utils` (#10898)
hlky Mar 2, 2025
54043c3
Update VAE Decode endpoints (#10939)
hlky Mar 2, 2025
4aaa0d2
[chore] fix-copies to flux pipelines (#10941)
sayakpaul Mar 3, 2025
7513162
[Tests] Remove more encode prompts tests (#10942)
sayakpaul Mar 3, 2025
5e3b7d2
Add EasyAnimateV5.1 text-to-video, image-to-video, control-to-video g…
bubbliiiing Mar 3, 2025
9e910c4
Fix SD2.X clip single file load projection_dim (#10770)
Teriks Mar 3, 2025
c9a219b
add from_single_file to animatediff (#10924)
Mar 3, 2025
982f9b3
Add Example of IPAdapterScaleCutoffCallback to Docs (#10934)
ParagEkbote Mar 3, 2025
f92e599
Update pipeline_cogview4.py (#10944)
zRzRzRzRzRzRzR Mar 3, 2025
8f15be1
Fix redundant prev_output_channel assignment in UNet2DModel (#10945)
ahmedbelgacem Mar 3, 2025
30cef6b
Improve load_ip_adapter RAM Usage (#10948)
CyberVy Mar 4, 2025
7855ac5
[tests] make tests device-agnostic (part 4) (#10508)
faaany Mar 4, 2025
cc22058
Update evaluation.md (#10938)
sayakpaul Mar 4, 2025
97fda1b
[LoRA] feat: support non-diffusers lumina2 LoRAs. (#10909)
sayakpaul Mar 4, 2025
11d8e3c
[Quantization] support pass MappingType for TorchAoConfig (#10927)
a120092009 Mar 4, 2025
dcd77ce
Fix the missing parentheses when calling is_torchao_available in quan…
CyberVy Mar 4, 2025
3ee899f
[LoRA] Support Wan (#10943)
a-r-r-o-w Mar 4, 2025
b8215b1
Fix incorrect seed initialization when args.seed is 0 (#10964)
azolotenkov Mar 4, 2025
66bf7ea
feat: add Mixture-of-Diffusers ControlNet Tile upscaler Pipeline for …
elismasilva Mar 4, 2025
a74f02f
[Docs] CogView4 comment fix (#10957)
zRzRzRzRzRzRzR Mar 4, 2025
24c062a
update check_input for cogview4 (#10966)
yiyixuxu Mar 4, 2025
08f74a8
Add VAE Decode endpoint slow test (#10946)
hlky Mar 5, 2025
e031caf
[flux lora training] fix t5 training bug (#10845)
linoytsaban Mar 5, 2025
fbf6b85
use style bot GH Action from `huggingface_hub` (#10970)
hanouticelina Mar 5, 2025
37b8edf
[train_dreambooth_lora.py] Fix the LR Schedulers when `num_train_epoc…
flyxiv Mar 6, 2025
6e2a93d
[tests] fix tests for save load components (#10977)
sayakpaul Mar 6, 2025
b150276
Fix loading OneTrainer Flux LoRA (#10978)
hlky Mar 6, 2025
ea81a42
fix default values of Flux guidance_scale in docstrings (#10982)
catwell Mar 6, 2025
1be0202
[CI] remove synchornized. (#10980)
sayakpaul Mar 6, 2025
f103993
Bump jinja2 from 3.1.5 to 3.1.6 in /examples/research_projects/realfi…
dependabot[bot] Mar 6, 2025
54ab475
Fix Flux Controlnet Pipeline _callback_tensor_inputs Missing Some Ele…
CyberVy Mar 6, 2025
790a909
[Single File] Add user agent to SF download requests. (#10979)
DN6 Mar 6, 2025
748cb0f
Add CogVideoX DDIM Inversion to Community Pipelines (#10956)
LittleNyima Mar 6, 2025
d55f411
fix wan i2v pipeline bugs (#10975)
yupeng1111 Mar 7, 2025
2e5203b
Hunyuan I2V (#10983)
a-r-r-o-w Mar 7, 2025
6a0137e
Fix Graph Breaks When Compiling CogView4 (#10959)
chengzeyi Mar 7, 2025
363d1ab
Wan VAE move scaling to pipeline (#10998)
hlky Mar 7, 2025
a2d3d6a
[LoRA] remove full key prefix from peft. (#11004)
sayakpaul Mar 7, 2025
1357931
[Single File] Add single file support for Wan T2V/I2V (#10991)
DN6 Mar 7, 2025
b38450d
Add STG to community pipelines (#10960)
kinam0252 Mar 7, 2025
1fddee2
[LoRA] Improve copied from comments in the LoRA loader classes (#10995)
sayakpaul Mar 8, 2025
9a1810f
Fix for fetching variants only (#10646)
DN6 Mar 10, 2025
f5edaa7
[Quantization] Add Quanto backend (#10756)
DN6 Mar 10, 2025
0703ce8
[Single File] Add single file loading for SANA Transformer (#10947)
ishan-modi Mar 10, 2025
26149c0
[LoRA] Improve warning messages when LoRA loading becomes a no-op (#1…
sayakpaul Mar 10, 2025
8eefed6
[LoRA] CogView4 (#10981)
a-r-r-o-w Mar 10, 2025
e7e6d85
[Tests] improve quantization tests by additionally measuring the infe…
sayakpaul Mar 10, 2025
b88fef4
[`Research Project`] Add AnyText: Multilingual Visual Text Generation…
tolgacangoz Mar 10, 2025
9add071
[Quantization] Allow loading TorchAO serialized Tensor objects with t…
DN6 Mar 11, 2025
4e3ddd5
fix: mixture tiling sdxl pipeline - adjust gerating time_ids & embedd…
elismasilva Mar 11, 2025
e4b056f
[LoRA] support wan i2v loras from the world. (#11025)
sayakpaul Mar 11, 2025
7e0db46
Fix SD3 IPAdapter feature extractor (#11027)
hlky Mar 11, 2025
36d0553
chore: fix help messages in advanced diffusion examples (#10923)
wonderfan Mar 11, 2025
d87ce2c
Fix missing **kwargs in lora_pipeline.py (#11011)
CyberVy Mar 11, 2025
e7ffeae
Fix for multi-GPU WAN inference (#10997)
AmericanPresidentJimmyCarter Mar 11, 2025
5428046
[Refactor] Clean up import utils boilerplate (#11026)
DN6 Mar 12, 2025
8b4f8ba
Use `output_size` in `repeat_interleave` (#11030)
hlky Mar 12, 2025
733b44a
[hybrid inference 🍯🐝] Add VAE encode (#11017)
hlky Mar 12, 2025
4ea9f89
Wan Pipeline scaling fix, type hint warning, multi generator fix (#11…
hlky Mar 12, 2025
20e4b6a
[LoRA] change to warning from info when notifying the users about a L…
sayakpaul Mar 12, 2025
5551506
Rename Lumina(2)Text2ImgPipeline -> Lumina(2)Pipeline (#10827)
hlky Mar 13, 2025
5e48cd2
making ```formatted_images``` initialization compact (#10801)
YanivDorGalron Mar 13, 2025
ccc8321
Fix aclnnRepeatInterleaveIntWithDim error on NPU for get_1d_rotary_po…
ZhengKai91 Mar 13, 2025
2f0f281
[Tests] restrict memory tests for quanto for certain schemes. (#11052)
sayakpaul Mar 14, 2025
124ac3e
[LoRA] feat: support non-diffusers wan t2v loras. (#11059)
sayakpaul Mar 14, 2025
8ead643
[examples/controlnet/train_controlnet_sd3.py] Fixes #11050 - Cast pro…
andjoer Mar 14, 2025
6b9a333
reverts accidental change that removes attn_mask in attn. Improves fl…
entrpn Mar 14, 2025
be54a95
Fix deterministic issue when getting pipeline dtype and device (#10696)
dimitribarbot Mar 15, 2025
cc19726
[Tests] add requires peft decorator. (#11037)
sayakpaul Mar 15, 2025
82188ce
CogView4 Control Block (#10809)
zRzRzRzRzRzRzR Mar 15, 2025
1001425
[CI] pin transformers version for benchmarking. (#11067)
sayakpaul Mar 16, 2025
33d10af
Fix Wan I2V Quality (#11087)
chengzeyi Mar 17, 2025
2e83cbb
LTX 0.9.5 (#10968)
a-r-r-o-w Mar 18, 2025
b4d7e9c
make PR GPU tests conditioned on styling. (#11099)
sayakpaul Mar 18, 2025
813d42c
Group offloading improvements (#11094)
a-r-r-o-w Mar 18, 2025
3fe3bc0
Fix pipeline_flux_controlnet.py (#11095)
co63oc Mar 18, 2025
2791682
update readme instructions. (#11096)
entrpn Mar 18, 2025
cb1b8b2
Resolve stride mismatch in UNet's ResNet to support Torch DDP (#11098)
jinc7461 Mar 18, 2025
3be6706
Fix Group offloading behaviour when using streams (#11097)
a-r-r-o-w Mar 18, 2025
0ab8fe4
Quality options in `export_to_video` (#11090)
hlky Mar 18, 2025
ae14612
[CI] uninstall deps properly from pr gpu tests. (#11102)
sayakpaul Mar 19, 2025
fc28791
[BUG] Fix Autoencoderkl train script (#11113)
lavinal712 Mar 19, 2025
a34d97c
[Wan LoRAs] make T2V LoRAs compatible with Wan I2V (#11107)
linoytsaban Mar 19, 2025
56f7400
[tests] enable bnb tests on xpu (#11001)
faaany Mar 19, 2025
dc62e69
[fix bug] PixArt inference_steps=1 (#11079)
lawrence-cj Mar 20, 2025
9f2d5c9
Flux with Remote Encode (#11091)
hlky Mar 20, 2025
15ad97f
[tests] make cuda only tests device-agnostic (#11058)
faaany Mar 20, 2025
2c1ed50
Provide option to reduce CPU RAM usage in Group Offload (#11106)
DN6 Mar 20, 2025
e9fda39
remove F.rms_norm for now (#11126)
yiyixuxu Mar 20, 2025
f424b1b
Notebooks for Community Scripts-8 (#11128)
ParagEkbote Mar 20, 2025
9b2c0a7
fix _callback_tensor_inputs of sd controlnet inpaint pipeline missing…
CyberVy Mar 21, 2025
844221a
[core] FasterCache (#10163)
a-r-r-o-w Mar 21, 2025
8a63aa5
add sana-sprint (#11074)
yiyixuxu Mar 21, 2025
a7d53a5
Don't override `torch_dtype` and don't use when `quantization_config`…
hlky Mar 21, 2025
0213179
Update README and example code for AnyText usage (#11028)
tolgacangoz Mar 23, 2025
1d37f42
Modify the implementation of retrieve_timesteps in CogView4-Control. …
zRzRzRzRzRzRzR Mar 23, 2025
5dbe4f5
[fix SANA-Sprint] (#11142)
lawrence-cj Mar 24, 2025
8907a70
New HunyuanVideo-I2V (#11066)
a-r-r-o-w Mar 24, 2025
7aac77a
[doc] Fix Korean Controlnet Train doc (#11141)
flyxiv Mar 24, 2025
1ddf3f3
Improve information about group offloading and layerwise casting (#11…
a-r-r-o-w Mar 24, 2025
739d6ec
add a timestep scale for sana-sprint teacher model (#11150)
lawrence-cj Mar 25, 2025
7dc52ea
[Quantization] dtype fix for GGUF + fix BnB tests (#11159)
DN6 Mar 26, 2025
de6a88c
Set self._hf_peft_config_loaded to True when LoRA is loaded using `lo…
kentdan3msu Mar 26, 2025
5d970a4
WanI2V encode_image (#11164)
hlky Mar 28, 2025
617c208
[Docs] Update Wan Docs with memory optimizations (#11089)
DN6 Mar 28, 2025
75d7e5c
Fix LatteTransformer3DModel dtype mismatch with enable_temporal_atten…
hlky Mar 29, 2025
2c59af7
Raise warning and round down if Wan num_frames is not 4k + 1 (#11167)
a-r-r-o-w Mar 31, 2025
eb50def
[Docs] Fix environment variables in `installation.md` (#11179)
remarkablemark Mar 31, 2025
d6f4774
Add `latents_mean` and `latents_std` to `SDXLLongPromptWeightingPipel…
hlky Mar 31, 2025
e8fc8b1
Bug fix in LTXImageToVideoPipeline.prepare_latents() when latents is …
kakukakujirori Mar 31, 2025
5a6edac
[tests] no hard-coded cuda (#11186)
faaany Apr 1, 2025
df1d7b0
[WIP] Add Wan Video2Video (#11053)
DN6 Apr 1, 2025
a7f07c1
map BACKEND_RESET_MAX_MEMORY_ALLOCATED to reset_peak_memory_stats on …
yao-matrix Apr 2, 2025
4d5a96e
fix autocast (#11190)
jiqing-feng Apr 2, 2025
be0b7f5
fix: for checking mandatory and optional pipeline components (#11189)
elismasilva Apr 2, 2025
fe2b397
remove unnecessary call to `F.pad` (#10620)
bm-synth Apr 2, 2025
d8c617c
allow models to run with a user-provided dtype map instead of a singl…
hlky Apr 2, 2025
52b460f
[tests] HunyuanDiTControlNetPipeline inference precision issue on XPU…
faaany Apr 2, 2025
da857be
Revert `save_model` in ModelMixin save_pretrained and use safe_serial…
hlky Apr 2, 2025
e5c6027
[docs] `torch_dtype` map (#11194)
hlky Apr 2, 2025
54dac3a
Fix enable_sequential_cpu_offload in CogView4Pipeline (#11195)
hlky Apr 2, 2025
78c2fdc
SchedulerMixin from_pretrained and ConfigMixin Self type annotation (…
hlky Apr 2, 2025
b0ff822
Update import_utils.py (#10329)
Lakshaysharma048 Apr 2, 2025
c97b709
Add CacheMixin to Wan and LTX Transformers (#11187)
DN6 Apr 2, 2025
c4646a3
feat: [Community Pipeline] - FaithDiff Stable Diffusion XL Pipeline (…
elismasilva Apr 2, 2025
d9023a6
[Model Card] standardize advanced diffusion training sdxl lora (#7615)
chiral-carbon Apr 3, 2025
480510a
Change KolorsPipeline LoRA Loader to StableDiffusion (#11198)
BasileLewan Apr 3, 2025
6edb774
Update Style Bot workflow (#11202)
hanouticelina Apr 3, 2025
f10775b
Fixed requests.get function call by adding timeout parameter. (#11156)
kghamilton89 Apr 4, 2025
aabf8ce
Fix Single File loading for LTX VAE (#11200)
DN6 Apr 4, 2025
94f2c48
[feat]Add strength in flux_fill pipeline (denoising strength for flux…
Suprhimp Apr 4, 2025
13e4849
[LTX0.9.5] Refactor `LTXConditionPipeline` for text-only conditioning…
tolgacangoz Apr 4, 2025
41afb66
Add Wan with STG as a community pipeline (#11184)
Ednaordinary Apr 5, 2025
8ad68c1
Add missing MochiEncoder3D.gradient_checkpointing attribute (#11146)
mjkvaak-amd Apr 5, 2025
506f39a
enable 1 case on XPU (#11219)
yao-matrix Apr 7, 2025
5ded26c
ensure dtype match between diffused latents and vae weights (#8391)
heyalexchoi Apr 7, 2025
fc7a867
[docs] MPS update (#11212)
stevhliu Apr 8, 2025
841504b
Add support to pass image embeddings to the WAN I2V pipeline. (#11175)
goiri Apr 8, 2025
fbf61f4
[train_controlnet.py] Fix the LR schedulers when num_train_epochs is …
Bhavay-2001 Apr 8, 2025
723dbdd
[Training] Better image interpolation in training scripts (#11206)
asomoza Apr 8, 2025
fb54499
[LoRA] Implement hot-swapping of LoRA (#9453)
BenjaminBossan Apr 8, 2025
c51b6bd
introduce compute arch specific expectations and fix test_sd3_img2img…
yao-matrix Apr 8, 2025
71f34fc
[Flux LoRA] fix issues in flux lora scripts (#11111)
linoytsaban Apr 8, 2025
5d49b3e
Flux quantized with lora (#10990)
hlky Apr 8, 2025
4b27c4a
[feat] implement `record_stream` when using CUDA streams during group…
sayakpaul Apr 8, 2025
1a04812
[bistandbytes] improve replacement warnings for bnb (#11132)
sayakpaul Apr 8, 2025
b924251
minor update to sana sprint docs. (#11236)
sayakpaul Apr 9, 2025
f685981
[docs] minor updates to dtype map docs. (#11237)
sayakpaul Apr 9, 2025
6bfacf0
[LoRA] support more comyui loras for Flux 🚨 (#10985)
sayakpaul Apr 9, 2025
fd02aad
fix: SD3 ControlNet validation so that it runs on a A100. (#11238)
sayakpaul Apr 9, 2025
9ee3dd3
AudioLDM2 Fixes (#11244)
hlky Apr 9, 2025
437cb36
AutoModel (#11115)
hlky Apr 9, 2025
c36c745
fix FluxReduxSlowTests::test_flux_redux_inference case failure on XPU…
yao-matrix Apr 9, 2025
552cd32
[docs] AutoModel (#11250)
hlky Apr 9, 2025
edc154d
Update Ruff to latest Version (#10919)
DN6 Apr 9, 2025
6a7c2d0
fix flux controlnet bug (#11152)
free001style Apr 9, 2025
d1387ec
fix timeout constant (#11252)
sayakpaul Apr 9, 2025
5b27f8a
fix consisid imports (#11254)
sayakpaul Apr 9, 2025
0706786
fix wan ftfy import (#11262)
yiyixuxu Apr 9, 2025
ffda873
[LoRA] support musubi wan loras. (#11243)
sayakpaul Apr 10, 2025
68663f8
fix test_vanilla_funetuning failure on XPU and A100 (#11263)
yao-matrix Apr 10, 2025
77b4f66
make test_stable_diffusion_inpaint_fp16 pass on XPU (#11264)
yao-matrix Apr 10, 2025
450dc48
make test_dict_tuple_outputs_equivalent pass on XPU (#11265)
yao-matrix Apr 10, 2025
0efdf41
add onnxruntime-qnn & onnxruntime-cann (#11269)
xieofxie Apr 10, 2025
31c4f24
make test_instant_style_multiple_masks pass on XPU (#11266)
yao-matrix Apr 10, 2025
e121d0e
[BUG] Fix convert_vae_pt_to_diffusers bug (#11078)
lavinal712 Apr 10, 2025
b8093e6
Fix LTX 0.9.5 single file (#11271)
hlky Apr 10, 2025
ea5a6a8
[Tests] Cleanup lora tests utils (#11276)
sayakpaul Apr 10, 2025
511d738
[CI] relax tolerance for unclip further (#11268)
sayakpaul Apr 11, 2025
7054a34
do not use `DIFFUSERS_REQUEST_TIMEOUT` for notification bot (#11273)
sayakpaul Apr 11, 2025
bc26105
Fix incorrect tile_latent_min_width calculation in AutoencoderKLMochi…
kuantuna Apr 11, 2025
0ef2935
HiDream Image (#11231)
hlky Apr 11, 2025
ec0b2b3
flow matching lcm scheduler (#11170)
quickjkee Apr 12, 2025
ed41db8
Update autoencoderkl_allegro.md (#11303)
Forbu Apr 13, 2025
97e0ef4
Hidream refactoring follow ups (#11299)
a-r-r-o-w Apr 13, 2025
36538e1
Fix incorrect tile_latent_min_width calculations (#11305)
kuantuna Apr 13, 2025
f1f38ff
[ControlNet] Adds controlnet for SanaTransformer (#11040)
ishan-modi Apr 13, 2025
aa541b9
make KandinskyV22PipelineInpaintCombinedFastTests::test_float16_infer…
yao-matrix Apr 14, 2025
fa1ac50
make test_stable_diffusion_karras_sigmas pass on XPU (#11310)
yao-matrix Apr 14, 2025
c7f2d23
make `KolorsPipelineFastTests::test_inference_batch_single_identical`…
faaany Apr 14, 2025
a8f5134
[LoRA] support more SDXL loras. (#11292)
sayakpaul Apr 14, 2025
ba6008a
[HiDream] code example (#11317)
linoytsaban Apr 14, 2025
1cb73cb
import for FlowMatchLCMScheduler (#11318)
asomoza Apr 14, 2025
dcf836c
Use float32 on mps or npu in transformer_hidream_image's rope (#11316)
hlky Apr 14, 2025
8819cda
Add `skrample` section to `community_projects.md` (#11319)
Beinsezii Apr 14, 2025
cefa28f
[docs] Promote `AutoModel` usage (#11300)
sayakpaul Apr 15, 2025
9352a5c
[LoRA] Add LoRA support to AuraFlow (#10216)
hameerabbasi Apr 15, 2025
6e80d24
Fix vae.Decoder prev_output_channel (#11280)
hlky Apr 15, 2025
7edace9
fix CPU offloading related fail cases on XPU (#11288)
yao-matrix Apr 15, 2025
7ecfe29
[docs] fix hidream docstrings. (#11325)
sayakpaul Apr 15, 2025
b6156aa
Rewrite AuraFlowPatchEmbed.pe_selection_index_based_on_dim to be torc…
AstraliteHeart Apr 15, 2025
4b868f1
post release 0.33.0 (#11255)
sayakpaul Apr 15, 2025
d3b2699
another fix for FlowMatchLCMScheduler forgotten import (#11330)
asomoza Apr 15, 2025
b316104
Fix Hunyuan I2V for `transformers>4.47.1` (#11293)
DN6 Apr 16, 2025
3252d7a
unpin torch versions for onnx Dockerfile (#11290)
sayakpaul Apr 16, 2025
7212f35
[single file] enable telemetry for single file loading when using GGU…
sayakpaul Apr 16, 2025
ce1063a
[docs] add a snippet for compilation in the auraflow docs. (#11327)
sayakpaul Apr 16, 2025
59f1b7b
Hunyuan I2V fast tests fix (#11341)
DN6 Apr 16, 2025
d63e6fc
[BUG] fixed _toctree.yml alphabetical ordering (#11277)
ishan-modi Apr 16, 2025
3e59d53
Fix wrong dtype argument name as torch_dtype (#11346)
nPeppon Apr 16, 2025
efc9d68
[chore] fix lora docs utils (#11338)
sayakpaul Apr 17, 2025
b00a564
[docs] add note about use_duck_shape in auraflow docs. (#11348)
sayakpaul Apr 17, 2025
29d2afb
[LoRA] Propagate `hotswap` better (#11333)
sayakpaul Apr 17, 2025
0567932
[Hi Dream] follow-up (#11296)
yiyixuxu Apr 17, 2025
4397f59
[bitsandbytes] improve dtype mismatch handling for bnb + lora. (#11270)
sayakpaul Apr 17, 2025
ee6ad51
Update controlnet_flux.py (#11350)
haofanwang Apr 17, 2025
eef3d65
enable 2 test cases on XPU (#11332)
yao-matrix Apr 17, 2025
bbd0c16
[BNB] Fix test_moving_to_cpu_throws_warning (#11356)
SunMarc Apr 18, 2025
0021bfa
support Wan-FLF2V (#11353)
yiyixuxu Apr 18, 2025
ef47726
Fix: `StableDiffusionXLControlNetAdapterInpaintPipeline` incorrectly …
Kazuki-Yoda Apr 18, 2025
5a2e0f7
update output for Hidream transformer (#11366)
yiyixuxu Apr 19, 2025
5873377
[Wan2.1-FLF2V] update conversion script (#11365)
yiyixuxu Apr 19, 2025
44eeba0
[Flux LoRAs] fix lr scheduler bug in distributed scenarios (#11242)
linoytsaban Apr 21, 2025
0dec414
[train_dreambooth_lora_sdxl.py] Fix the LR Schedulers when num_train_…
kghamilton89 Apr 21, 2025
7a4a126
fix issue that training flux controlnet was unstable and validation r…
PromeAIpro Apr 21, 2025
e7f3a73
Fix Wan I2V prepare_latents dtype (#11371)
a-r-r-o-w Apr 21, 2025
79ea8eb
[BUG] fixes in kadinsky pipeline (#11080)
ishan-modi Apr 21, 2025
aff574f
Add Serialized Type Name kwarg in Model Output (#10502)
anzr299 Apr 21, 2025
0434db9
[cogview4][feat] Support attention mechanism with variable-length sup…
OleehyO Apr 21, 2025
a00c73a
Support different-length pos/neg prompts for FLUX.1-schnell variants …
josephrocca Apr 21, 2025
f59df3b
[Refactor] Minor Improvement for import utils (#11161)
ishan-modi Apr 21, 2025
6ab62c7
Add stochastic sampling to FlowMatchEulerDiscreteScheduler (#11369)
apolinario Apr 22, 2025
e30d3bf
[LoRA] add LoRA support to HiDream and fine-tuning script (#11281)
linoytsaban Apr 22, 2025
f108ad8
Update modeling imports (#11129)
a-r-r-o-w Apr 22, 2025
448c72a
[HiDream] move deprecation to 0.35.0 (#11384)
yiyixuxu Apr 22, 2025
026507c
Update README_hidream.md (#11386)
AMEERAZAM08 Apr 23, 2025
6cef71d
Fix group offloading with block_level and use_stream=True (#11375)
a-r-r-o-w Apr 23, 2025
4b60f4b
[train_dreambooth_flux] Add LANCZOS as the default interpolation mode…
ishandutta0098 Apr 23, 2025
a4f9c3c
[Feature] Added Xlab Controlnet support (#11249)
ishan-modi Apr 23, 2025
b4be422
Kolors additional pipelines, community contrib (#11372)
Teriks Apr 23, 2025
edd7880
[HiDream LoRA] optimizations + small updates (#11381)
linoytsaban Apr 24, 2025
7986834
Fix Flux IP adapter argument in the pipeline example (#11402)
AeroDEmi Apr 24, 2025
e8312e7
[BUG] fixed WAN docstring (#11226)
ishan-modi Apr 24, 2025
f00a995
Fix typos in strings and comments (#11407)
co63oc Apr 24, 2025
bd96a08
[train_dreambooth_lora.py] Set LANCZOS as default interpolation mode …
merterbak Apr 26, 2025
aa5f5d4
[tests] add tests to check for graph breaks, recompilation, cuda sync…
sayakpaul Apr 28, 2025
9ce89e2
enable group_offload cases and quanto cases on XPU (#11405)
yao-matrix Apr 28, 2025
a7e9f85
enable test_layerwise_casting_memory cases on XPU (#11406)
yao-matrix Apr 28, 2025
0e3f271
[tests] fix import. (#11434)
sayakpaul Apr 28, 2025
b3b04fe
[train_text_to_image] Better image interpolation in training scripts …
tongyu0924 Apr 28, 2025
3da98e7
[train_text_to_image_lora] Better image interpolation in training scr…
tongyu0924 Apr 28, 2025
7567adf
enable 28 GGUF test cases on XPU (#11404)
yao-matrix Apr 28, 2025
0ac1d5b
[Hi-Dream LoRA] fix bug in validation (#11439)
linoytsaban Apr 28, 2025
4a9ab65
Fixing missing provider options argument (#11397)
urpetkov-amd Apr 28, 2025
58431f1
Set LANCZOS as the default interpolation for image resizing in Contro…
YoulunPeng Apr 29, 2025
8fe5a14
Raise warning instead of error for block offloading with streams (#11…
a-r-r-o-w Apr 30, 2025
60892c5
enable marigold_intrinsics cases on XPU (#11445)
yao-matrix Apr 30, 2025
c865115
`torch.compile` fullgraph compatibility for Hunyuan Video (#11457)
a-r-r-o-w Apr 30, 2025
fbe2fe5
enable consistency test cases on XPU, all passed (#11446)
yao-matrix Apr 30, 2025
35fada4
enable unidiffuser test cases on xpu (#11444)
yao-matrix Apr 30, 2025
38 changes: 38 additions & 0 deletions .github/ISSUE_TEMPLATE/remote-vae-pilot-feedback.yml
@@ -0,0 +1,38 @@
name: "\U0001F31F Remote VAE"
description: Feedback for remote VAE pilot
labels: [ "Remote VAE" ]

body:
  - type: textarea
    id: positive
    validations:
      required: true
    attributes:
      label: Did you like the remote VAE solution?
      description: |
        If you liked it, we would appreciate it if you could elaborate what you liked.

  - type: textarea
    id: feedback
    validations:
      required: true
    attributes:
      label: What can be improved about the current solution?
      description: |
        Let us know the things you would like to see improved. Note that we will work optimizing the solution once the pilot is over and we have usage.

  - type: textarea
    id: others
    validations:
      required: true
    attributes:
      label: What other VAEs you would like to see if the pilot goes well?
      description: |
        Provide a list of the VAEs you would like to see in the future if the pilot goes well.

  - type: textarea
    id: additional-info
    attributes:
      label: Notify the members of the team
      description: |
        Tag the following folks when submitting this feedback: @hlky @sayakpaul
1 change: 1 addition & 0 deletions .github/workflows/benchmark.yml
@@ -38,6 +38,7 @@ jobs:
python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
python -m uv pip install -e [quality,test]
python -m uv pip install pandas peft
python -m uv pip uninstall transformers && python -m uv pip install transformers==4.48.0
- name: Environment
run: |
python utils/print_env.py
3 changes: 2 additions & 1 deletion .github/workflows/build_docker_images.yml
@@ -34,7 +34,7 @@ jobs:
id: file_changes
uses: jitterbit/get-changed-files@v1
with:
format: 'space-delimited'
format: "space-delimited"
token: ${{ secrets.GITHUB_TOKEN }}

- name: Build Changed Docker Images
@@ -67,6 +67,7 @@ jobs:
- diffusers-pytorch-cuda
- diffusers-pytorch-compile-cuda
- diffusers-pytorch-xformers-cuda
- diffusers-pytorch-minimum-cuda
- diffusers-flax-cpu
- diffusers-flax-tpu
- diffusers-onnxruntime-cpu
188 changes: 183 additions & 5 deletions .github/workflows/nightly_tests.yml
@@ -180,6 +180,55 @@ jobs:
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  run_torch_compile_tests:
    name: PyTorch Compile CUDA tests

    runs-on:
      group: aws-g4dn-2xlarge

    container:
      image: diffusers/diffusers-pytorch-compile-cuda
      options: --gpus 0 --shm-size "16gb" --ipc host

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: NVIDIA-SMI
        run: |
          nvidia-smi
      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test,training]
      - name: Environment
        run: |
          python utils/print_env.py
      - name: Run torch compile tests on GPU
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          RUN_COMPILE: yes
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile -s -v -k "compile" --make-reports=tests_torch_compile_cuda tests/
      - name: Failure short reports
        if: ${{ failure() }}
        run: cat reports/tests_torch_compile_cuda_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_compile_test_reports
          path: reports

      - name: Generate Report and Notify Channel
        if: always()
        run: |
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

run_big_gpu_torch_tests:
name: Torch tests on big GPU
strategy:
@@ -235,15 +284,73 @@ jobs:
run: |
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  torch_minimum_version_cuda_tests:
    name: Torch Minimum Version CUDA Tests
    runs-on:
      group: aws-g4dn-2xlarge
    container:
      image: diffusers/diffusers-pytorch-minimum-cuda
      options: --shm-size "16gb" --ipc host --gpus 0
    defaults:
      run:
        shell: bash
    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install peft@git+https://github.com/huggingface/peft.git
          pip uninstall accelerate -y && python -m uv pip install -U accelerate@git+https://github.com/huggingface/accelerate.git

      - name: Environment
        run: |
          python utils/print_env.py

      - name: Run PyTorch CUDA tests
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
          CUBLAS_WORKSPACE_CONFIG: :16:8
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_torch_minimum_version_cuda \
            tests/models/test_modeling_common.py \
            tests/pipelines/test_pipelines_common.py \
            tests/pipelines/test_pipeline_utils.py \
            tests/pipelines/test_pipelines.py \
            tests/pipelines/test_pipelines_auto.py \
            tests/schedulers/test_schedulers.py \
            tests/others

      - name: Failure short reports
        if: ${{ failure() }}
        run: |
          cat reports/tests_torch_minimum_version_cuda_stats.txt
          cat reports/tests_torch_minimum_version_cuda_failures_short.txt

      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_minimum_version_cuda_test_reports
          path: reports

run_flax_tpu_tests:
name: Nightly Flax TPU Tests
runs-on: docker-tpu
runs-on:
group: gcp-ct5lp-hightpu-8t
if: github.event_name == 'schedule'

container:
image: diffusers/diffusers-flax-tpu
options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
options: --shm-size "16gb" --ipc host --privileged ${{ vars.V5_LITEPOD_8_ENV}} -v /mnt/hf_cache:/mnt/hf_cache
defaults:
run:
shell: bash
@@ -347,6 +454,77 @@ jobs:
pip install slack_sdk tabulate
python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

  run_nightly_quantization_tests:
    name: Torch quantization nightly tests
    strategy:
      fail-fast: false
      max-parallel: 2
      matrix:
        config:
          - backend: "bitsandbytes"
            test_location: "bnb"
            additional_deps: ["peft"]
          - backend: "gguf"
            test_location: "gguf"
            additional_deps: ["peft"]
          - backend: "torchao"
            test_location: "torchao"
            additional_deps: []
          - backend: "optimum_quanto"
            test_location: "quanto"
            additional_deps: []
    runs-on:
      group: aws-g6e-xlarge-plus
    container:
      image: diffusers/diffusers-pytorch-cuda
      options: --shm-size "20gb" --ipc host --gpus 0
    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2
      - name: NVIDIA-SMI
        run: nvidia-smi
      - name: Install dependencies
        run: |
          python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
          python -m uv pip install -e [quality,test]
          python -m uv pip install -U ${{ matrix.config.backend }}
          if [ "${{ join(matrix.config.additional_deps, ' ') }}" != "" ]; then
            python -m uv pip install ${{ join(matrix.config.additional_deps, ' ') }}
          fi
          python -m uv pip install pytest-reportlog
      - name: Environment
        run: |
          python utils/print_env.py
      - name: ${{ matrix.config.backend }} quantization tests on GPU
        env:
          HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
          # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
          CUBLAS_WORKSPACE_CONFIG: :16:8
          BIG_GPU_MEMORY: 40
        run: |
          python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
            --make-reports=tests_${{ matrix.config.backend }}_torch_cuda \
            --report-log=tests_${{ matrix.config.backend }}_torch_cuda.log \
            tests/quantization/${{ matrix.config.test_location }}
      - name: Failure short reports
        if: ${{ failure() }}
        run: |
          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_stats.txt
          cat reports/tests_${{ matrix.config.backend }}_torch_cuda_failures_short.txt
      - name: Test suite reports artifacts
        if: ${{ always() }}
        uses: actions/upload-artifact@v4
        with:
          name: torch_cuda_${{ matrix.config.backend }}_reports
          path: reports
      - name: Generate Report and Notify Channel
        if: always()
        run: |
          pip install slack_sdk tabulate
          python utils/log_reports.py >> $GITHUB_STEP_SUMMARY

# M1 runner currently not well supported
# TODO: (Dhruv) add these back when we setup better testing for Apple Silicon
# run_nightly_tests_apple_m1:
@@ -385,7 +563,7 @@ jobs:
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
# HF_TOKEN: ${{ secrets.HF_TOKEN }}
# HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -441,7 +619,7 @@ jobs:
# shell: arch -arch arm64 bash {0}
# env:
# HF_HOME: /System/Volumes/Data/mnt/cache
# HF_TOKEN: ${{ secrets.HF_TOKEN }}
# HF_TOKEN: ${{ secrets.DIFFUSERS_HF_HUB_READ_TOKEN }}
# run: |
# ${CONDA_RUN} python -m pytest -n 1 -s -v --make-reports=tests_torch_mps \
# --report-log=tests_torch_mps.log \
@@ -461,4 +639,4 @@
# if: always()
# run: |
# pip install slack_sdk tabulate
# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
# python utils/log_reports.py >> $GITHUB_STEP_SUMMARY
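As a quick sanity check on the quantization matrix added to `nightly_tests.yml` above, the sketch below resolves one matrix entry by hand and prints the commands the workflow would run on the GPU runner. This is an illustrative shell snippet, not part of the PR; the backend, test location, and extra dependency values are copied from the workflow's `bitsandbytes` entry.

```shell
# Hypothetical expansion of the bitsandbytes entry of the quantization matrix.
# The workflow itself substitutes these via ${{ matrix.config.* }}.
BACKEND="bitsandbytes"
TEST_LOCATION="bnb"
ADDITIONAL_DEPS="peft"

# Install step (mirrors "Install dependencies" in the job)
echo "python -m uv pip install -U ${BACKEND} ${ADDITIONAL_DEPS}"

# Test step (mirrors the pytest invocation in the job)
echo "python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
  --make-reports=tests_${BACKEND}_torch_cuda \
  --report-log=tests_${BACKEND}_torch_cuda.log \
  tests/quantization/${TEST_LOCATION}"
```

Each of the four matrix entries expands the same way, with `torchao` and `optimum_quanto` installing no additional dependencies.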
17 changes: 17 additions & 0 deletions .github/workflows/pr_style_bot.yml
@@ -0,0 +1,17 @@
name: PR Style Bot

on:
  issue_comment:
    types: [created]

permissions:
  contents: write
  pull-requests: write

jobs:
  style:
    uses: huggingface/huggingface_hub/.github/workflows/style-bot-action.yml@main
    with:
      python_quality_dependencies: "[quality]"
    secrets:
      bot_token: ${{ secrets.GITHUB_TOKEN }}