Conversation

@leejet (Owner) commented Aug 29, 2025

Features:

  • Wan2.1 T2V 1.3B
  • Wan2.1 T2V 14B
  • Wan2.1 I2V 14B
  • Wan2.2 T2V A14B
  • Wan2.2 I2V A14B
  • Wan2.2 TI2V 5B
  • Wan2.1 FLF2V 14B
  • Wan2.2 FLF2V 14B

TODO:

  • Vace
  • Fun control
  • Reduce the memory usage of WAN VAE

Warning: Currently, only the CUDA and CPU backends support the WAN VAE. If you are using another backend, try using --vae-on-cpu to run the WAN VAE on the CPU, although this will be very slow.
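For example, taking the Wan2.1 T2V 1.3B command from the Examples below and appending the flag (a sketch; the negative prompt is omitted here for brevity):

.\bin\Release\sd.exe -M vid_gen --diffusion-model ..\..\ComfyUI\models\diffusion_models\wan2.1_t2v_1.3B_fp16.safetensors --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -W 832 -H 480 --video-frames 33 --vae-on-cpu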

Examples

Since GitHub does not support AVI files, the files I uploaded were converted from AVI to MP4.
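If you want to do the same conversion yourself, something like this ffmpeg invocation works (assuming ffmpeg is installed; the file names are just illustrative):

ffmpeg -i Wan2.1_1.3B_t2v.avi -c:v libx264 -pix_fmt yuv420p Wan2.1_1.3B_t2v.mp4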

Wan2.1 T2V 1.3B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.1_t2v_1.3B_fp16.safetensors --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部, 畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa --video-frames 33
Wan2.1_1.3B_t2v.mp4

Wan2.1 T2V 14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.1-t2v-14b-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa  --offload-to-cpu --video-frames 33
Wan2.1_14B_t2v.mp4

Wan2.1 I2V 14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.1-i2v-14b-480p-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf --clip_vision ..\..\ComfyUI\models\clip_vision\clip_vision_h.safetensors -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 480 -H 832 --diffusion-fa --video-frames 33 --offload-to-cpu -i ..\assets\cat_with_sd_cpp_42.png
Wan2.1_14B_i2v.mp4

Wan2.2 T2V A14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-LowNoise-Q8_0.gguf --high-noise-diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-HighNoise-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 3.5 --sampling-method euler --steps 10 --high-noise-cfg-scale 3.5 --high-noise-sampling-method euler --high-noise-steps 8 -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa --offload-to-cpu --video-frames 33
Wan2.2_14B_t2v.mp4

Wan2.2 I2V A14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf --high-noise-diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 3.5 --sampling-method euler --steps 10 --high-noise-cfg-scale 3.5 --high-noise-sampling-method euler --high-noise-steps 8 -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa --offload-to-cpu --video-frames 33 -i ..\assets\cat_with_sd_cpp_42.png
Wan2.2_14B_i2v.mp4

Wan2.2 T2V A14B T2I

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-LowNoise-Q8_0.gguf --high-noise-diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-HighNoise-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 3.5 --sampling-method euler --steps 10 --high-noise-cfg-scale 3.5 --high-noise-sampling-method euler --high-noise-steps 8 -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa --offload-to-cpu
Wan2.2_14B_t2i

Wan2.2 T2V 14B with Lora

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-LowNoise-Q8_0.gguf --high-noise-diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-T2V-A14B-HighNoise-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat<lora:wan2.2_t2v_lightx2v_4steps_lora_v1.1_low_noise:1><lora:|high_noise|wan2.2_t2v_lightx2v_4steps_lora_v1.1_high_noise:1>" --cfg-scale 3.5 --sampling-method euler --steps 4 --high-noise-cfg-scale 3.5 --high-noise-sampling-method euler --high-noise-steps 4 -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 832 -H 480 --diffusion-fa --offload-to-cpu --lora-model-dir ..\..\ComfyUI\models\loras --video-frames 33
Wan2.2_14B_t2v_lora.mp4

Wan2.2 TI2V 5B

T2V

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.2_ti2v_5B_fp16.safetensors --vae ..\..\ComfyUI\models\vae\wan2.2_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 480 -H 832 --diffusion-fa --offload-to-cpu --video-frames 33
Wan2.2_5B_t2v.mp4

I2V

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.2_ti2v_5B_fp16.safetensors --vae ..\..\ComfyUI\models\vae\wan2.2_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf  -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 480 -H 832 --diffusion-fa --offload-to-cpu --video-frames 33 -i ..\assets\cat_with_sd_cpp_42.png
Wan2.2_5B_i2v.mp4

Wan2.1 FLF2V 14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\wan2.1-flf2v-14b-720p-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf --clip_vision ..\..\ComfyUI\models\clip_vision\clip_vision_h.safetensors -p "glass flower blossom" --cfg-scale 6.0 --sampling-method euler -v -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 480 -H 832 --diffusion-fa --video-frames 33 --offload-to-cpu --init-img ..\..\ComfyUI\input\start_image.png --end-img ..\..\ComfyUI\input\end_image.png
Wan2.1_14B_flf2v.mp4

Wan2.2 FLF2V 14B

.\bin\Release\sd.exe -M vid_gen --diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-I2V-A14B-LowNoise-Q8_0.gguf --high-noise-diffusion-model  ..\..\ComfyUI\models\diffusion_models\Wan2.2-I2V-A14B-HighNoise-Q8_0.gguf --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf --cfg-scale 3.5 --sampling-method euler --steps 10 --high-noise-cfg-scale 3.5 --high-noise-sampling-method euler --high-noise-steps 8 -v -p "glass flower blossom" -n "色调艳丽,过曝,静态,细节模糊不清,字幕,风格,作品,画作,画面,静止,整体发灰,最差质量,低质量,JPEG压缩残留,丑陋的,残缺的,多余的手指,画得不好的手部,画得不好的脸部,畸形的,毁容的,形态畸形的肢体,手指融合,静止不动的画面,杂乱的背景,三条腿,背景人很多,倒着走" -W 480 -H 832 --diffusion-fa --video-frames 33 --offload-to-cpu --init-img ..\..\ComfyUI\input\start_image.png --end-img ..\..\ComfyUI\input\end_image.png
Wan2.2_14B_flf2v.mp4

@leejet (Owner, Author) commented Aug 29, 2025

Wan support is finally added. This took me a long time. Once this PR is merged, I will try to add support for Qwen Image.

@Green-Sky (Contributor)

Great job @leejet, very nice. I can't wait to try it later.

I see you went for MJPEG+AVI; is there also an option to output it as a PNG image sequence?

@leejet (Owner, Author) commented Aug 29, 2025

> Great job @leejet, very nice. I can't wait to try it later.
>
> I see you went for MJPEG+AVI; is there also an option to output it as a PNG image sequence?

I will add command-line parameters to control it, but the priority is not very high.
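In the meantime, a possible workaround (a sketch, assuming ffmpeg is available; output.avi stands in for whatever file name sd.exe writes) is to split the generated AVI into a PNG sequence afterwards:

ffmpeg -i output.avi frame_%04d.png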

@Green-Sky (Contributor) commented Sep 2, 2025

> @Green-Sky It might not be a problem with the implementation but with the smaller model itself. I think that smaller model's distillation was not done very well; I had a lot of trouble getting consistent results in ComfyUI using that smaller model as well. I had far better results using a quantized version of the full model.

Hmm, I don't think you can call Wan2.2 TI2V 5B a distilled model. It has its own VAE, which has far more compression than the other VAE.

> Wan2.2 open-sources a 5B model built with our advanced Wan2.2-VAE that achieves a compression ratio of 16×16×4.

Also, the same model behaves just fine with text only input.
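To make that compression ratio concrete, here is a rough sketch of the latent-size arithmetic (my own illustration, not code from this PR; it assumes the first frame is kept whole and each later group of 4 frames maps to one latent frame):

    #include <cstdio>

    // Rough latent-size arithmetic for the Wan2.2-VAE's 16x16x4 compression ratio.
    int main() {
        int W = 832, H = 480, T = 33;   // output video size in pixels/frames
        int lat_w = W / 16;             // 52: 16x spatial compression in width
        int lat_h = H / 16;             // 30: 16x spatial compression in height
        int lat_t = 1 + (T - 1) / 4;    // 9: first frame whole, then 4x temporal
        std::printf("latent: %dx%dx%d\n", lat_w, lat_h, lat_t);
        return 0;
    }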

@stduhpf (Contributor) commented Sep 2, 2025

> Wan2.2 TI2V 5B with image input still seems to be somewhat broken,

I think wan2.2 TI2V kind of sucks in I2V mode in ComfyUI too.

Edit: I tried to match the settings as closely as I could in ComfyUI. It's definitely not as bad, but maybe it's just a lucky seed.
ComfyUI_01039_

Edit 2: No, something's definitely wrong with this PR's implementation: the cat keeps sneezing no matter the seed, and this doesn't happen at all in ComfyUI.

Edit 3: I was using --sed instead of --seed (thank god #767 is merged into master now).

Attachments: output (seed 42), output0 (seed 0).

@tyllmoritz

When I tried this with the Vulkan backend, I had problems with im2col_3d.

For a quick and dirty test, I just reverted the commit "cuda/cpu: add im2col_3d support" in https://github.com/leejet/ggml/tree/wan

There are two better solutions (already implemented by @leejet and @jeffbolznv, thanks for your work):

@leejet (Owner, Author) commented Sep 6, 2025

Since ggml-org/ggml has already synchronized the PR I made to add the WAN-related operations, I have decided to merge this PR first. It already contains too many changes. Support for VACE and FUN will come in a separate pull request.

@leejet merged commit cb1d975 into master on Sep 6, 2025. 8 checks passed.
@Green-Sky (Contributor) left a review comment:

Sorry for the late review.

@Green-Sky (Contributor)

--offload-to-cpu is missing from the help output too.

@leejet (Owner, Author) commented Sep 6, 2025

All of these have been fixed. Thank you for your review comments.

@Amin456789

@LostRuins please add this to your GUI if possible. It would be great if you added LoRA support too.

Thank you guys for making this. Thanks, leejet and the others.

@LostRuins (Contributor) commented Sep 28, 2025

Hello @leejet, I noticed that sd_vid_gen_params_t doesn't contain any parameter for toggling VAE tiling. Does VAE tiling currently work for WAN videos, and is it possible to enable it? Thanks!

Edit: The reason I ask is that without VAE tiling it currently tries to allocate a massive buffer on Vulkan and goes OOM.

@LostRuins (Contributor)

Also, can someone help me understand how flow shift works? Is that what's causing these abrupt transitions, and how can I avoid it?
cat

@LostRuins (Contributor) commented Sep 30, 2025

wtf

Still getting really weird results in most generations.

@wbruna any ideas?

Final edit: All resolved by switching to wan2.2-rapid-mega-aio-v3

@leejet (Owner, Author) commented Oct 11, 2025

> Hello @leejet, I noticed that sd_vid_gen_params_t doesn't contain any parameter for toggling VAE tiling. Does VAE tiling currently work for WAN videos, and is it possible to enable it? Thanks!
>
> Edit: The reason I ask is that without VAE tiling it currently tries to allocate a massive buffer on Vulkan and goes OOM.

Currently, WAN VAE does not support video tiling, and I haven’t tested the feasibility of video tiling yet.

@leejet (Owner, Author) commented Oct 11, 2025

> Also, can someone help me understand how flow shift works? Is that what's causing these abrupt transitions, and how can I avoid it?

Try lower shift values (2.0 to 5.0) for lower resolution videos and higher shift values (7.0 to 12.0) for higher resolution images. https://huggingface.co/docs/diffusers/en/api/pipelines/wan#notes
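For a 480p clip, that might look like this (a sketch reusing the Wan2.1 T2V 1.3B example with the negative prompt omitted, and assuming the --flow-shift flag added in leejet#780):

.\bin\Release\sd.exe -M vid_gen --diffusion-model ..\..\ComfyUI\models\diffusion_models\wan2.1_t2v_1.3B_fp16.safetensors --vae ..\..\ComfyUI\models\vae\wan_2.1_vae.safetensors --t5xxl ..\..\ComfyUI\models\text_encoders\umt5-xxl-encoder-Q8_0.gguf -p "a lovely cat" --cfg-scale 6.0 --sampling-method euler -W 832 -H 480 --video-frames 33 --flow-shift 3.0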

@LostRuins (Contributor)

> Currently, WAN VAE does not support video tiling, and I haven’t tested the feasibility of video tiling yet.

Would it be possible to simply run the VAE per-frame (the entire frame at once)? I confess I don't know how it works, but the memory usage for a single-frame image is perfectly OK. The problem only comes when doing longer videos with many frames.

@leejet (Owner, Author) commented Oct 12, 2025

>> Currently, WAN VAE does not support video tiling, and I haven’t tested the feasibility of video tiling yet.
>
> Would it be possible to simply run the VAE per-frame (the entire frame at once)? I confess I don't know how it works, but the memory usage for a single-frame image is perfectly OK. The problem only comes when doing longer videos with many frames.

        struct ggml_tensor* decode(struct ggml_context* ctx,
                                   struct ggml_tensor* z,
                                   int64_t b = 1) {
            // z: [b*c, t, h, w]
            GGML_ASSERT(b == 1);

            clear_cache();

            auto decoder = std::dynamic_pointer_cast<Decoder3d>(blocks["decoder"]);
            auto conv2   = std::dynamic_pointer_cast<CausalConv3d>(blocks["conv2"]);

            int64_t iter_ = z->ne[2];  // number of temporal chunks in the latent
            auto x        = conv2->forward(ctx, z);
            struct ggml_tensor* out;
            // decode one temporal chunk at a time; the causal convs carry
            // cached features from the previous chunk via _feat_map
            for (int64_t i = 0; i < iter_; i++) {
                _conv_idx = 0;
                if (i == 0) {
                    auto in = ggml_slice(ctx, x, 2, i, i + 1);  // [b*c, 1, h, w]
                    out     = decoder->forward(ctx, in, b, _feat_map, _conv_idx, i);
                } else {
                    auto in   = ggml_slice(ctx, x, 2, i, i + 1);  // [b*c, 1, h, w]
                    auto out_ = decoder->forward(ctx, in, b, _feat_map, _conv_idx, i);
                    out       = ggml_concat(ctx, out, out_, 2);
                }
            }
            if (wan2_2) {
                out = unpatchify(ctx, out, 2, b);
            }
            clear_cache();
            return out;
        }

Currently, decoding is done frame by frame, and the compute buffer size used is the same for both 33 frames and 81 frames.

@LostRuins (Contributor)

Oh, then why is it smaller for something like 1 frame or 5 frames?

@leejet (Owner, Author) commented Oct 12, 2025

Starting from chunk 1, each chunk depends on data from the previous chunk, so the computation graph is different and the compute buffer grows. In theory, the compute buffer shouldn't grow any more after chunk 1, but in practice it actually stops growing after chunk 2. I tried creating a separate computation graph for each chunk, and indeed the buffer no longer grows after chunk 1. However, the results for chunk 1 were a bit odd, so I disabled the related code; you can check the code around build_graph_partial.

By the way, for Wan VAE, the decoding rule for chunks is: chunk 0 corresponds to 1 frame, and starting from chunk 1, each chunk corresponds to 4 frames.
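A tiny sketch of that chunk/frame mapping (my own illustration, not code from this PR):

    #include <cstdio>

    // Chunk 0 decodes to 1 frame; every later chunk decodes to 4 frames,
    // so a video with 1 + 4*k frames needs 1 + k latent chunks.
    long frames_to_chunks(long frames) { return 1 + (frames - 1) / 4; }
    long chunks_to_frames(long chunks) { return 1 + (chunks - 1) * 4; }

    int main() {
        std::printf("33 frames -> %ld chunks\n", frames_to_chunks(33));  // 9
        std::printf("81 frames -> %ld chunks\n", frames_to_chunks(81));  // 21
        return 0;
    }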

@henk717 commented Oct 12, 2025

As a measure: for 80 frames in our mode (KoboldCpp side), I am measuring 75 GB of VRAM used during generation on the 14B 2.2. If I use only 10 frames, I can fit it on my 3090 fine. So something is ballooning the VRAM usage at higher frame counts.

@LostRuins (Contributor) commented Oct 13, 2025

What resolution were you generating at?

Also, if this logic is correct, then 10 frames should take the same amount of memory as 80 frames, but it seems higher.

@leejet (Owner, Author) commented Oct 13, 2025

Have you used --diffusion-fa? This option can significantly reduce the VRAM usage.

stduhpf added a commit to stduhpf/stable-diffusion.cpp that referenced this pull request Oct 23, 2025
* docs: add sd.cpp-webui as an available frontend (leejet#738)

* fix: correct head dim check and L_k padding of flash attention (leejet#736)

* fix: convert f64 to f32 and i64 to i32 when loading weights

* docs: add LocalAI to README's UIs (leejet#741)

* sync: update ggml

* sync: update ggml

* feat: upgrade musa sdk to rc4.2.0 (leejet#732)

* feat: change image dimensions requirement for DiT models (leejet#742)

* feat: add missing models and parameters to image metadata (leejet#743)

* feat: add new scheduler types, clip skip and vae to image embedded params

- If a non default scheduler is set, include it in the 'Sampler' tag in the data
embedded into the final image.
- If a custom VAE path is set, include the vae name (without path and extension)
in embedded image params under a `VAE:` tag.
- If a custom Clip skip is set, include that Clip skip value in embedded image
params under a `Clip skip:` tag.

* feat: add separate diffusion and text models to metadata

---------

Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>

* refector: optimize the usage of tensor_types

* feat: support build against system installed GGML library (leejet#749)

* chore: avoid setting GGML_MAX_NAME when building against external ggml (leejet#751)

An external ggml will most likely have been built with the default
GGML_MAX_NAME value (64), which would be inconsistent with the value
set by our build (128). That would be an ODR violation, and it could
easily cause memory corruption issues due to the different
sizeof(struct ggml_tensor) values.

For now, when linking against an external ggml, we demand it has been
patched with a bigger GGML_MAX_NAME, since we can't check against a
value defined only at build time.

* Conv2D direct support (leejet#744)

* Conv2DDirect for VAE stage

* Enable only for Vulkan, reduced duplicated code

* Cmake option to use conv2d direct

* conv2d direct always on for opencl

* conv direct as a flag

* fix merge typo

* Align conv2d behavior to flash attention's

* fix readme

* add conv2d direct for controlnet

* add conv2d direct for esrgan

* clean code, use enable_conv2d_direct/get_all_blocks

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>

* sync: update ggml, make cuda im2col a little faster

* chore: add Nvidia 30 series (cuda arch 86) to build

* feat: throttle model loading progress updates (leejet#782)

Some terminals have slow display latency, so frequent output
during model loading can actually slow down the process.

Also, since tensor loading times can vary a lot, the progress
display now shows the average across past iterations instead
of just the last one.

* docs: add missing dash to docs/chroma.md (leejet#771)

* docs: add compile option needed by Ninja (leejet#770)

* feat: show usage on unknown arg (leejet#767)

* fix: typo in the verbose long flag (leejet#783)

* feat: add wan2.1/2.2 support (leejet#778)

* add wan vae suppport

* add wan model support

* add umt5 support

* add wan2.1 t2i support

* make flash attn work with wan

* make wan a little faster

* add wan2.1 t2v support

* add wan gguf support

* add offload params to cpu support

* add wan2.1 i2v support

* crop image before resize

* set default fps to 16

* add diff lora support

* fix wan2.1 i2v

* introduce sd_sample_params_t

* add wan2.2 t2v support

* add wan2.2 14B i2v support

* add wan2.2 ti2v support

* add high noise lora support

* sync: update ggml submodule url

* avoid build failure on linux

* avoid build failure

* update ggml

* update ggml

* fix sd_version_is_wan

* update ggml, fix cpu im2col_3d

* fix ggml_nn_attention_ext mask

* add cache support to ggml runner

* fix the issue of illegal memory access

* unify image loading processing

* add wan2.1/2.2 FLF2V support

* fix end_image mask

* update to latest ggml

* add GGUFReader

* update docs

* feat: add support for timestep boundary based automatic expert routing in Wan MoE (leejet#779)

* Wan MoE: Automatic expert routing based on timestep boundary

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>

* feat: add flow shift parameter (for SD3 and Wan) (leejet#780)

* Add flow shift parameter (for SD3 and Wan)

* unify code style and fix some issues

---------

Co-authored-by: leejet <leejet714@gmail.com>

* docs: update docs and help message

* chore: update to c++17

* docs: update docs/wan.md

* fix: add flash attn support check (leejet#803)

* feat: support incrementing ref image index (omni-kontext) (leejet#755)

* kontext: support  ref images indices

* lora: support x_embedder

* update help message

* Support for negative indices

* support for OmniControl (offsets at index 0)

* c++11 compat

* add --increase-ref-index option

* simplify the logic and fix some issues

* update README.md

* remove unused variable

---------

Co-authored-by: leejet <leejet714@gmail.com>

* feat: add detailed tensor loading time stat (leejet#793)

* fix: clarify lora quant support and small fixes (leejet#792)

* fix: accept NULL in sd_img_gen_params_t::input_id_images_path (leejet#809)

* chore: update flash attention warnings (leejet#805)

* fix: use {} for params init instead of memset (leejet#781)

* chore: remove sd3 flash attention warn (leejet#812)

* feat: use log_printf to print ggml logs (leejet#545)

* chore: add install() support in CMakeLists.txt (leejet#540)

* feat: add SmoothStep Scheduler (leejet#813)

* feat: add sd3 flash attn support (leejet#815)

* fix: make tiled VAE reuse the compute buffer (leejet#821)

* feat: reduce CLIP memory usage with no embeddings (leejet#768)

* fix: make weight override more robust against ggml changes (leejet#760)

* fix: do not force VAE type to f32 on SDXL (leejet#716)

This seems to be a leftover from the initial SDXL support: it's
not enough to avoid NaN issues, and it's not not needed for the
fixed sdxl-vae-fp16-fix .

* feat: use Euler sampling by default for SD3 and Flux (leejet#753)

Thank you for your contribution.

* fix: harden for large files (leejet#643)

* feat: Add SYCL Dockerfile (leejet#651)

* feat: increase work_ctx memory buffer size (leejet#814)

* docs: update docs

* feat: add VAE encoding tiling support and adaptive overlap  (leejet#484)

* implement  tiling vae encode support

* Tiling (vae/upscale): adaptative overlap

* Tiling: fix edge case

* Tiling: fix crash when less than 2 tiles per dim

* remove extra dot

* Tiling: fix edge cases for adaptative overlap

* tiling: fix edge case

* set vae tile size via env var

* vae tiling: refactor again, base on smaller buffer for alignment

* Use bigger tiles for encode (to match compute buffer size)

* Fix edge case when tile is bigger than latent

* non-square VAE tiling (#3)

* refactor tile number calculation

* support non-square tiles

* add env var to change tile overlap

* add safeguards and better error messages for SD_TILE_OVERLAP

* add safeguards and include overlapping factor for SD_TILE_SIZE

* avoid rounding issues when specifying SD_TILE_SIZE as a factor

* lower SD_TILE_OVERLAP limit

* zero-init empty output buffer

* Fix decode latent size

* fix encode

* tile size params instead of env

* Tiled vae parameter validation (#6)

* avoid crash with invalid tile sizes, use 0 for default

* refactor default tile size, limit overlap factor

* remove explicit parameter for relative tile size

* limit encoding tile to latent size

* unify code style and format code

* update docs

* fix get_tile_sizes in decode_first_stage

---------

Co-authored-by: Wagner Bruna <wbruna@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>

* feat: add vace support (leejet#819)

* add wan vace t2v support

* add --vace-strength option

* add vace i2v support

* fix the processing of vace_context

* add vace v2v support

* update docs

* feat: optimize tensor loading time (leejet#790)

* opt tensor loading

* fix build failure

* revert the changes

* allow the use of n_threads

* fix lora loading

* optimize lora loading

* add mutex

* use atomic

* fix build

* fix potential duplicate issue

* avoid duplicate lookup of lora tensor

* fix progeress bar

* remove unused remove_duplicates

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: simplify the logic of pm id image loading (leejet#827)

* feat: add sgm_uniform scheduler, simple scheduler, and support for NitroFusion (leejet#675)

* feat: Add timestep shift and two new schedulers

* update readme

* fix spaces

* format code

* simplify SGMUniformSchedule

* simplify shifted_timestep logic

* avoid conflict

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: move tiling cacl and debug print into the tiling code branch (leejet#833)

* refactor: simplify DPM++ (2S) Ancestral (leejet#667)

* chore: set release tag by commit count

* chore: fix workflow (leejet#836)

* fix: avoid multithreading issues in the model loader

* fix: avoid segfault for pix2pix models without reference images (leejet#766)

* fix: avoid segfault for pix2pix models with no reference images

* fix: default to empty reference on pix2pix models to avoid segfault

* use resize instead of reserve

* format code

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: remove unused --normalize-input parameter (leejet#835)

* fix: correct tensor deduplication logic (leejet#844)

* docs: include Vulkan compatibility for LoRA quants (leejet#845)

* docs: HipBLAS / ROCm build instruction fix (leejet#843)

* fix: tensor loading thread count (leejet#854)

* fix: optimize the handling of CLIP embedding weight (leejet#840)

* sync: update ggml

* sync: update ggml

* fix: optimize the handling of embedding weight (leejet#859)

* feat: add support for Flux Controls and Flex.2 (leejet#692)

* docs: update README.md (leejet#866)

* chore: fix dockerfile libgomp1 dependency + improvements (leejet#852)

* fix: ensure directory iteration results are sorted by filename (leejet#858)

* chore: fix vulkan ci (leejet#878)

* feat: add support for more esrgan models & x2 & x1 models (leejet#855)

* feat: add a stand-alone upscale mode (leejet#865)

* feat: add a stand-alone upscale mode

* fix prompt option check

* format code

* update README.md

---------

Co-authored-by: leejet <leejet714@gmail.com>

* refactor: deal with default img-cfg-scale at the library level (leejet#869)

* feat: add Qwen Image support (leejet#851)

* add qwen tokenizer

* add qwen2.5 vl support

* mv qwen.hpp -> qwenvl.hpp

* add qwen image model

* add qwen image t2i pipeline

* fix qwen image flash attn

* add qwen image i2i pipline

* change encoding of vocab_qwen.hpp to utf8

* fix get_first_stage_encoding

* apply jeffbolz f32 patch

leejet#851 (comment)

* fix the issue that occurs when using CUDA with k-quants weights

* optimize the handling of the FeedForward precision fix

* to_add_out precision fix

* update docs

* fix: resolve VAE tiling problem in Qwen Image (leejet#873)

* fix: avoid generating black images when running T5 on the GPU (leejet#882)

* fix: correct canny preprocessor (leejet#861)

* fix: better progress display for second-order samplers (leejet#834)

* feat: add Qwen Image Edit support (leejet#877)

* add ref latent support for qwen image

* optimize clip_preprocess and fix get_first_stage_encoding

* add qwen2vl vit support

* add qwen image edit support

* fix qwen image edit pipeline

* add mmproj file support

* support dynamic number of Qwen image transformer blocks

* set prompt_template_encode_start_idx every time

* to_add_out precision fix

* to_out.0 precision fix

* update docs

---------

Co-authored-by: Daniele <57776841+daniandtheweb@users.noreply.github.com>
Co-authored-by: Erik Scholz <Green-Sky@users.noreply.github.com>
Co-authored-by: leejet <leejet714@gmail.com>
Co-authored-by: Ettore Di Giacinto <mudler@users.noreply.github.com>
Co-authored-by: R0CKSTAR <xiaodong.ye@mthreads.com>
Co-authored-by: stduhpf <stephduh@live.fr>
Co-authored-by: one-lithe-rune <skapusniak@lithe-runes.com>
Co-authored-by: Seas0 <seashkey@gmail.com>
Co-authored-by: NekopenDev <197017459+nekopendev@users.noreply.github.com>
Co-authored-by: SmallAndSoft <45131567+SmallAndSoft@users.noreply.github.com>
Co-authored-by: Markus Hartung <mail@hartmark.se>
Co-authored-by: clibdev <52199778+clibdev@users.noreply.github.com>
Co-authored-by: Richard Palethorpe <io@richiejp.com>
Co-authored-by: rmatif <kingrealriadh@gmail.com>
Co-authored-by: vmobilis <75476228+vmobilis@users.noreply.github.com>
Co-authored-by: Stefan-Olt <stefan-oltmanns@gmx.net>
Co-authored-by: Sharuzzaman Ahmat Raslan <sharuzzaman@gmail.com>
Co-authored-by: Serkan Sahin <14278530+SergeantSerk@users.noreply.github.com>
Co-authored-by: Pedrito <pedro.c.vfx@gmail.com>