Conversation

@rmatif (Contributor) commented May 9, 2025

This PR adds support for the timestep-shift technique required for inference with NitroFusion, Diff-Instruct*, and other one-step models. It also adds two schedulers, SGM Uniform and Simple, because the existing scheduler, for reasons that remain unclear, fails at the step-2 calculation and produces output nearly identical to step 1.

NitroFusion is one of the best models for single-step inference, making it useful for inference on compute-constrained devices like mobile phones or CPUs.
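
The core of the technique is a small remapping of the sampler's timesteps. Here is a rough sketch with illustrative names (not the exact code in this PR; see the node.py reference below for the authoritative implementation):

```cpp
// Illustrative sketch of timestep shifting, not the exact code in this PR.
// With TIMESTEPS = 1000 and a shift S, the sampler's timestep range
// [0, 999] is compressed onto [0, S], the low-noise region these
// one-step models were tuned for.
const float TIMESTEPS = 1000.0f;

float shift_timestep(float t, int shift) {
    return t * (float)shift / TIMESTEPS;  // t = 999, shift = 250 -> ~249.75
}

// In the denoiser, roughly:
//   float t         = sigma_to_t(sigma);
//   float t_shifted = shift_timestep(t, shift);
//   float sigma_s   = t_to_sigma(t_shifted);
//   // rescale the latent so its noise level is consistent with sigma_s,
//   // then condition the model on t_shifted instead of t.
```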

Example command:

./bin/sd -m nitrosd-realism_f16.gguf -v -p "cute cat" --cfg-scale 1 --steps 1 --timestep-shift 250 -H 1024 -W 1024 --seed 2024 --schedule sgm_uniform

| Step | NitroSD-Realism (timestep-shift 250) | NitroSD-Vibrant (timestep-shift 500) | Diff-Instruct* (timestep-shift 400) |
|---|---|---|---|
| Step 1 | [image: 1step] | [image: 1step-vibrant] | [image: 1step-diff-instructstar] |
| Step 2 | [image: 2step] | [image: 2step-vibrant] | Not well supported |
| Step 3 | [image: 3step] | [image: 3step-vibrant] | Not well supported |
| Step 4 | [image: 4step] | [image: 4step-vibrant] | Not well supported |

The authors recommend timestep-shift values of 250 for NitroSD-Realism, 500 for NitroSD-Vibrant, and 400 for Diff-Instruct*.

I created GGUF versions of NitroFusion that already include the fixed SDXL VAE, available for download here.

The authors mentioned it's possible to extract LoRA weights from these models and apply them to other checkpoints. I’ll try to do that in the future.

EDIT: Just added the Diff-Instruct* GGUF.
References:

- Timestep-shift implementation: node.py
- SGM Uniform: sd_schedulers
- Simple: samplers.py
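
Both new schedulers are tiny. Here is a rough C++ sketch of the idea, following the referenced ComfyUI code above (illustrative only; the exact code in this PR may differ):

```cpp
#include <functional>
#include <vector>

// Illustrative sketches of the two schedules, mirroring the referenced
// ComfyUI implementations. t_to_sigma maps a (possibly fractional)
// timestep in [0, TIMESTEPS - 1] to a sigma; TIMESTEPS is 1000 for
// SD-style models.
using t_to_sigma_t = std::function<float(float)>;
const int TIMESTEPS = 1000;

// SGM Uniform: timesteps spaced uniformly from t_max down toward 0;
// the first n become sigmas, then the terminal sigma 0 is appended.
std::vector<float> sgm_uniform_sigmas(int n, t_to_sigma_t t_to_sigma) {
    std::vector<float> sigmas;
    sigmas.reserve(n + 1);
    float start = TIMESTEPS - 1;
    float step  = start / (float)n;
    for (int i = 0; i < n; i++) {
        sigmas.push_back(t_to_sigma(start - step * i));
    }
    sigmas.push_back(0.0f);
    return sigmas;
}

// Simple: walk the model's sigma table backwards with a fixed stride,
// so every step lands exactly on a trained timestep.
std::vector<float> simple_sigmas(int n, t_to_sigma_t t_to_sigma) {
    std::vector<float> sigmas;
    sigmas.reserve(n + 1);
    float stride = TIMESTEPS / (float)n;
    for (int i = 0; i < n; i++) {
        sigmas.push_back(t_to_sigma((float)(TIMESTEPS - 1 - (int)(i * stride))));
    }
    sigmas.push_back(0.0f);
    return sigmas;
}
```

In short, Simple always lands on trained timesteps by striding through the model's sigma table, while SGM Uniform spaces timesteps uniformly and lets t_to_sigma interpolate between them.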

@Green-Sky (Contributor):

Tested simple in isolation, and it works 👍

Review on denoiser.hpp (outdated):

```cpp
    result_sigmas.reserve(n + 1);

    int model_sigmas_len = TIMESTEPS;
```

Review comment (Contributor): trailing space

@Green-Sky (Contributor):

Please rebase and update this PR on master; it is not nice to test right now.
Also, to aid mergeability, please split out the individual features where possible (like the simple schedule).

I am currently trying to get sgm to work. :)

@Green-Sky (Contributor):

fixes #198

@rmatif (Contributor, Author) commented Jul 27, 2025

> Please rebase and update this PR on master; it is not nice to test right now. Also, to aid mergeability, please split out the individual features where possible (like the simple schedule).
>
> I am currently trying to get sgm to work. :)

I'll try to do it next week. I think the changes are small enough to be integrated into a single PR, but if that's not preferred, I can make a separate PR for the new schedulers and the timestep shift.

@Green-Sky Rebase complete. Can you test it now? Hopefully I didn't break anything.

@Green-Sky (Contributor):

Can confirm SGM Uniform works too.

@Green-Sky (Contributor):

I saw some model authors recommend the simple schedule for flux models, so I tried it.

Using my custom hyper-flux.1-lite-8b-8step q5_k

| discrete | simple |
|---|---|
| [image: hyper_flux_light_euler_discrete] | [image: hyper_flux_light_euler_simple] |
| [image: hyper_flux_light_euler_discrete] | [image: hyper_flux_light_euler_simple] |
| [image: hyper_flux_light_euler_discrete] | [image: hyper_flux_light_euler_simple] |

Honestly, it's hard to say which one is better: rows 1 and 3 look better with simple to me, but row 2 looks better with discrete. 🤷

@rmatif (Contributor, Author) commented Aug 4, 2025

> Honestly, it's hard to say which one is better: rows 1 and 3 look better with simple to me, but row 2 looks better with discrete. 🤷

I guess it's a matter of taste. I read somewhere a while ago that SGM Uniform performs better with distilled models. Now it's up to @leejet to merge it if everything looks good to him.

@Green-Sky (Contributor):

@leejet the simple scheduler is pretty common now, and required for chroma-flash (+heun), so I would really love to see this merged.

@rmatif there were conflicting changes in master; please update the PR with resolutions :)

@rmatif (Contributor, Author) commented Sep 14, 2025

> @rmatif there were conflicting changes in master; please update the PR with resolutions :)

Sorry, I don't have much time these days. If @leejet wants to take over and resolve the conflicts, that would be great. Otherwise, I'll try to get back to it when I have some time.

@leejet (Owner) commented Sep 14, 2025

> > @rmatif there were conflicting changes in master; please update the PR with resolutions :)
>
> Sorry, I don't have much time these days. If @leejet wants to take over and resolve the conflicts, that would be great. Otherwise, I'll try to get back to it when I have some time.

@rmatif Thank you for your contribution. I'll find some time to resolve the conflicts and merge this PR.

@wbruna (Contributor) commented Sep 14, 2025

@leejet, if I may make a suggestion:

```cpp
        // Check if the current schedule is SGMUniformSchedule
        if (std::dynamic_pointer_cast<SGMUniformSchedule>(schedule)) {
            std::vector<float> sigs;
```

This isn't the first time the current class hierarchy and code organization get in the way of the schedulers' algorithms: DDIM and TCD have to undo parts of StableDiffusionGGML::sample to work, and that mismatch also causes bugs like #663. Maybe these mismatches could be avoided by a different class hierarchy?
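
For instance, a rough sketch of the kind of shape I have in mind (illustrative only, not a concrete proposal):

```cpp
#include <cstdint>
#include <functional>
#include <vector>

// Sketch: one virtual entry point per schedule, carrying what a schedule
// needs (here, a t_to_sigma mapping), so StableDiffusionGGML::sample never
// has to probe the concrete type with dynamic_pointer_cast.
struct SigmaSchedule {
    virtual ~SigmaSchedule() = default;
    virtual std::vector<float> get_sigmas(uint32_t n,
                                          std::function<float(float)> t_to_sigma) = 0;
};

struct SGMUniformSchedule : SigmaSchedule {
    static constexpr int TIMESTEPS = 1000;  // SD-style models
    std::vector<float> get_sigmas(uint32_t n,
                                  std::function<float(float)> t_to_sigma) override {
        std::vector<float> sigs;
        if (n == 0) {
            return sigs;
        }
        sigs.reserve(n + 1);
        float step = (TIMESTEPS - 1) / (float)n;
        for (uint32_t i = 0; i < n; i++) {
            sigs.push_back(t_to_sigma((TIMESTEPS - 1) - step * i));
        }
        sigs.push_back(0.0f);
        return sigs;
    }
};
```

DDIM- and TCD-style schedules could then receive whatever sampling state they need through the same kind of interface, instead of undoing parts of sample().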

@leejet (Owner) commented Sep 15, 2025

> @leejet, if I may make a suggestion:
>
> ```cpp
>         // Check if the current schedule is SGMUniformSchedule
>         if (std::dynamic_pointer_cast<SGMUniformSchedule>(schedule)) {
>             std::vector<float> sigs;
> ```
>
> This isn't the first time the current class hierarchy and code organization get in the way of the schedulers' algorithms: DDIM and TCD have to undo parts of StableDiffusionGGML::sample to work, and that mismatch also causes bugs like #663. Maybe these mismatches could be avoided by a different class hierarchy?

@wbruna Actually, the sampling-related code does need to be refactored, and I'm planning to do that soon. But in this case it's not strictly required by the code structure; I've simplified the relevant logic instead, so feel free to take a look.

@wbruna (Contributor) commented Sep 15, 2025

Testing 58225a0, single-step generations:

| Nitro Vibrant (500) | Nitro Realism (250) | Diff-Instruct* (400) |
|---|---|---|
| [image: test_1757971086] | [image: test_1757971669] | [image: test_1757971769] |

@leejet (Owner) commented Sep 16, 2025

It looks like this PR can be merged now. Thank you everyone!

@leejet leejet merged commit 8376dfb into leejet:master Sep 16, 2025
9 checks passed
stduhpf added a commit to stduhpf/stable-diffusion.cpp that referenced this pull request Oct 23, 2025
* docs: add sd.cpp-webui as an available frontend (leejet#738)

* fix: correct head dim check and L_k padding of flash attention (leejet#736)

* fix: convert f64 to f32 and i64 to i32 when loading weights

* docs: add LocalAI to README's UIs (leejet#741)

* sync: update ggml

* sync: update ggml

* feat: upgrade musa sdk to rc4.2.0 (leejet#732)

* feat: change image dimensions requirement for DiT models (leejet#742)

* feat: add missing models and parameters to image metadata (leejet#743)

* feat: add new scheduler types, clip skip and vae to image embedded params

- If a non default scheduler is set, include it in the 'Sampler' tag in the data
embedded into the final image.
- If a custom VAE path is set, include the vae name (without path and extension)
in embedded image params under a `VAE:` tag.
- If a custom Clip skip is set, include that Clip skip value in embedded image
params under a `Clip skip:` tag.

* feat: add separate diffusion and text models to metadata

---------

Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>

* refactor: optimize the usage of tensor_types

* feat: support build against system installed GGML library (leejet#749)

* chore: avoid setting GGML_MAX_NAME when building against external ggml (leejet#751)

An external ggml will most likely have been built with the default
GGML_MAX_NAME value (64), which would be inconsistent with the value
set by our build (128). That would be an ODR violation, and it could
easily cause memory corruption issues due to the different
sizeof(struct ggml_tensor) values.

For now, when linking against an external ggml, we demand it has been
patched with a bigger GGML_MAX_NAME, since we can't check against a
value defined only at build time.

* Conv2D direct support (leejet#744)

* Conv2DDirect for VAE stage

* Enable only for Vulkan, reduced duplicated code

* Cmake option to use conv2d direct

* conv2d direct always on for opencl

* conv direct as a flag

* fix merge typo

* Align conv2d behavior to flash attention's

* fix readme

* add conv2d direct for controlnet

* add conv2d direct for esrgan

* clean code, use enable_conv2d_direct/get_all_blocks

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>

* sync: update ggml, make cuda im2col a little faster

* chore: add Nvidia 30 series (cuda arch 86) to build

* feat: throttle model loading progress updates (leejet#782)

Some terminals have slow display latency, so frequent output
during model loading can actually slow down the process.

Also, since tensor loading times can vary a lot, the progress
display now shows the average across past iterations instead
of just the last one.

* docs: add missing dash to docs/chroma.md (leejet#771)

* docs: add compile option needed by Ninja (leejet#770)

* feat: show usage on unknown arg (leejet#767)

* fix: typo in the verbose long flag (leejet#783)

* feat: add wan2.1/2.2 support (leejet#778)

* add wan vae support

* add wan model support

* add umt5 support

* add wan2.1 t2i support

* make flash attn work with wan

* make wan a little faster

* add wan2.1 t2v support

* add wan gguf support

* add offload params to cpu support

* add wan2.1 i2v support

* crop image before resize

* set default fps to 16

* add diff lora support

* fix wan2.1 i2v

* introduce sd_sample_params_t

* add wan2.2 t2v support

* add wan2.2 14B i2v support

* add wan2.2 ti2v support

* add high noise lora support

* sync: update ggml submodule url

* avoid build failure on linux

* avoid build failure

* update ggml

* update ggml

* fix sd_version_is_wan

* update ggml, fix cpu im2col_3d

* fix ggml_nn_attention_ext mask

* add cache support to ggml runner

* fix the issue of illegal memory access

* unify image loading processing

* add wan2.1/2.2 FLF2V support

* fix end_image mask

* update to latest ggml

* add GGUFReader

* update docs

* feat: add support for timestep boundary based automatic expert routing in Wan MoE (leejet#779)

* Wan MoE: Automatic expert routing based on timestep boundary

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>

* feat: add flow shift parameter (for SD3 and Wan) (leejet#780)

* Add flow shift parameter (for SD3 and Wan)

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>

* docs: update docs and help message

* chore: update to c++17

* docs: update docs/wan.md

* fix: add flash attn support check (leejet#803)

* feat: support incrementing ref image index (omni-kontext) (leejet#755)

* kontext: support  ref images indices

* lora: support x_embedder

* update help message

* Support for negative indices

* support for OmniControl (offsets at index 0)

* c++11 compat

* add --increase-ref-index option

* simplify the logic and fix some issues

* update README.md

* remove unused variable

---------

Co-authored-by: leejet <leejet714@gmail.com>

* feat: add detailed tensor loading time stat (leejet#793)

* fix: clarify lora quant support and small fixes (leejet#792)

* fix: accept NULL in sd_img_gen_params_t::input_id_images_path (leejet#809)

* chore: update flash attention warnings (leejet#805)

* fix: use {} for params init instead of memset (leejet#781)

* chore: remove sd3 flash attention warn (leejet#812)

* feat: use log_printf to print ggml logs (leejet#545)

* chore: add install() support in CMakeLists.txt (leejet#540)

* feat: add SmoothStep Scheduler (leejet#813)

* feat: add sd3 flash attn support (leejet#815)

* fix: make tiled VAE reuse the compute buffer (leejet#821)

* feat: reduce CLIP memory usage with no embeddings (leejet#768)

* fix: make weight override more robust against ggml changes (leejet#760)

* fix: do not force VAE type to f32 on SDXL (leejet#716)

This seems to be a leftover from the initial SDXL support: it's
not enough to avoid NaN issues, and it's not needed for the
fixed sdxl-vae-fp16-fix.

* feat: use Euler sampling by default for SD3 and Flux (leejet#753)

Thank you for your contribution.

* fix: harden for large files (leejet#643)

* feat: Add SYCL Dockerfile (leejet#651)

* feat: increase work_ctx memory buffer size (leejet#814)

* docs: update docs

* feat: add VAE encoding tiling support and adaptive overlap  (leejet#484)

* implement  tiling vae encode support

* Tiling (vae/upscale): adaptive overlap

* Tiling: fix edge case

* Tiling: fix crash when less than 2 tiles per dim

* remove extra dot

* Tiling: fix edge cases for adaptive overlap

* tiling: fix edge case

* set vae tile size via env var

* vae tiling: refactor again, base on smaller buffer for alignment

* Use bigger tiles for encode (to match compute buffer size)

* Fix edge case when tile is bigger than latent

* non-square VAE tiling (#3)

* refactor tile number calculation

* support non-square tiles

* add env var to change tile overlap

* add safeguards and better error messages for SD_TILE_OVERLAP

* add safeguards and include overlapping factor for SD_TILE_SIZE

* avoid rounding issues when specifying SD_TILE_SIZE as a factor

* lower SD_TILE_OVERLAP limit

* zero-init empty output buffer

* Fix decode latent size

* fix encode

* tile size params instead of env

* Tiled vae parameter validation (#6)

* avoid crash with invalid tile sizes, use 0 for default

* refactor default tile size, limit overlap factor

* remove explicit parameter for relative tile size

* limit encoding tile to latent size

* unify code style and format code

* update docs

* fix get_tile_sizes in decode_first_stage

---------

Co-authored-by: Wagner Bruna <wbruna@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>

* feat: add vace support (leejet#819)

* add wan vace t2v support

* add --vace-strength option

* add vace i2v support

* fix the processing of vace_context

* add vace v2v support

* update docs

* feat: optimize tensor loading time (leejet#790)

* opt tensor loading

* fix build failure

* revert the changes

* allow the use of n_threads

* fix lora loading

* optimize lora loading

* add mutex

* use atomic

* fix build

* fix potential duplicate issue

* avoid duplicate lookup of lora tensor

* fix progress bar

* remove unused remove_duplicates

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: simplify the logic of pm id image loading (leejet#827)

* feat: add sgm_uniform scheduler, simple scheduler, and support for NitroFusion (leejet#675)

* feat: Add timestep shift and two new schedulers

* update readme

* fix spaces

* format code

* simplify SGMUniformSchedule

* simplify shifted_timestep logic

* avoid conflict

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: move tiling calc and debug print into the tiling code branch (leejet#833)

* refactor: simplify DPM++ (2S) Ancestral (leejet#667)

* chore: set release tag by commit count

* chore: fix workflow (leejet#836)

* fix: avoid multithreading issues in the model loader

* fix: avoid segfault for pix2pix models without reference images (leejet#766)

* fix: avoid segfault for pix2pix models with no reference images

* fix: default to empty reference on pix2pix models to avoid segfault

* use resize instead of reserve

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: remove unused --normalize-input parameter (leejet#835)

* fix: correct tensor deduplication logic (leejet#844)

* docs: include Vulkan compatibility for LoRA quants (leejet#845)

* docs: HipBLAS / ROCm build instruction fix (leejet#843)

* fix: tensor loading thread count (leejet#854)

* fix: optimize the handling of CLIP embedding weight (leejet#840)

* sync: update ggml

* sync: update ggml

* fix: optimize the handling of embedding weight (leejet#859)

* feat: add support for Flux Controls and Flex.2 (leejet#692)

* docs: update README.md (leejet#866)

* chore: fix dockerfile libgomp1 dependency + improvements (leejet#852)

* fix: ensure directory iteration results are sorted by filename (leejet#858)

* chore: fix vulkan ci (leejet#878)

* feat: add support for more esrgan models & x2 & x1 models (leejet#855)

* feat: add a stand-alone upscale mode (leejet#865)

* feat: add a stand-alone upscale mode

* fix prompt option check

* format code

* update README.md

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: deal with default img-cfg-scale at the library level (leejet#869)

* feat: add Qwen Image support (leejet#851)

* add qwen tokenizer

* add qwen2.5 vl support

* mv qwen.hpp -> qwenvl.hpp

* add qwen image model

* add qwen image t2i pipeline

* fix qwen image flash attn

* add qwen image i2i pipeline

* change encoding of vocab_qwen.hpp to utf8

* fix get_first_stage_encoding

* apply jeffbolz f32 patch

leejet#851 (comment)

* fix the issue that occurs when using CUDA with k-quants weights

* optimize the handling of the FeedForward precision fix

* to_add_out precision fix

* update docs

* fix: resolve VAE tiling problem in Qwen Image (leejet#873)

* fix: avoid generating black images when running T5 on the GPU (leejet#882)

* fix: correct canny preprocessor (leejet#861)

* fix: better progress display for second-order samplers (leejet#834)

* feat: add Qwen Image Edit support (leejet#877)

* add ref latent support for qwen image

* optimize clip_preprocess and fix get_first_stage_encoding

* add qwen2vl vit support

* add qwen image edit support

* fix qwen image edit pipeline

* add mmproj file support

* support dynamic number of Qwen image transformer blocks

* set prompt_template_encode_start_idx every time

* to_add_out precision fix

* to_out.0 precision fix

* update docs

---------

Co-authored-by: Daniele <57776841+daniandtheweb@users.noreply.github.com>
Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: R0CKSTAR <xiaodong.ye@mthreads.com>
Co-authored-by: stduhpf <stephduh@live.fr>
Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>
Co-authored-by: Seas0 <seashkey@gmail.com>
Co-authored-by: NekopenDev <197017459+nekopendev@users.noreply.github.com>
Co-authored-by: SmallAndSoft <45131567+SmallAndSoft@users.noreply.github.com>
Co-authored-by: Markus Hartung <mail@hartmark.se>
Co-authored-by: clibdev <52199778+clibdev@users.noreply.github.com>
Co-authored-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: rmatif <kingrealriadh@gmail.com>
Co-authored-by: vmobilis <75476228+vmobilis@users.noreply.github.com>
Co-authored-by: Stefan-Olt <stefan-oltmanns@gmx.net>
Co-authored-by: Sharuzzaman Ahmat Raslan <sharuzzaman@gmail.com>
Co-authored-by: Serkan Sahin <14278530+SergeantSerk@users.noreply.github.com>
Co-authored-by: Pedrito <pedro.c.vfx@gmail.com>