ComfyUI-MagCache

🫖 Introduction

Magnitude-aware Cache (MagCache) is a training-free caching approach. It estimates the fluctuating differences among model outputs across timesteps from robust magnitude observations, and then accelerates inference through an error-modeling mechanism and an adaptive caching strategy. MagCache works well for both video and image diffusion models. For more details and results, please visit our project page and code.
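
The core idea can be illustrated with a short, hypothetical sketch (not the repository's actual implementation): compare the average magnitude of the model's residual output at consecutive timesteps; a ratio close to 1 means the output is barely changing, so a cached result can stand in for a full forward pass. The function name and the mean-absolute-value notion of magnitude are illustrative assumptions.

```python
import torch

def magnitude_ratio(prev_residual: torch.Tensor, curr_residual: torch.Tensor) -> float:
    """Ratio of mean magnitudes between residual outputs at consecutive
    timesteps. Values near 1.0 suggest the output is stable enough that
    a cached residual could be reused (illustrative sketch only)."""
    prev_mag = prev_residual.abs().mean()
    curr_mag = curr_residual.abs().mean()
    return (curr_mag / (prev_mag + 1e-8)).item()
```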

MagCache has now been integrated into ComfyUI and is compatible with the ComfyUI native nodes. ComfyUI-MagCache is easy to use: simply connect the MagCache node to the ComfyUI native nodes for seamless usage.

🔥 Latest News

  • If you like our project, please give us a star ⭐ on GitHub to stay up to date with the latest updates.
  • [2025/11/23] 🔥 Support Qwen-Image officially, achieving a 1.75x acceleration.
  • [2025/11/22] 🔥 Support HunyuanVideo-1.5 officially, achieving a 1.7x acceleration.
  • [2025/7/2] 🔥 Support Wan2.1-VACE-14B officially. Thanks @Qentah.
  • [2025/6/30] 🔥 Support Flux-Kontext with 2x speedup. Please see the demo here.
  • [2025/6/19] 🔥 Support FramePack with Gradio Demo in MagCache-FramePack.
  • [2025/6/18] 👉 We're collecting the best parameter settings from the community. Open this discussion issue to contribute your configuration!
  • [2025/6/17] 🔥 Support Wan2.1-VACE-1.3B, achieving a 1.7× acceleration.
  • [2025/6/17] 🔥 MagCache is supported by ComfyUI-WanVideoWrapper. Thanks @kijai.
  • [2025/6/16] 🔥 Support Chroma. Thanks @kabachuha for the codebase.
  • [2025/6/10] 🔥 Support Wan2.1 T2V & I2V, HunyuanVideo T2V, and FLUX-dev T2I.

Installation

  1. Go to the ComfyUI custom_nodes folder: ComfyUI/custom_nodes/
  2. Clone the repository: git clone https://github.com/zehong-ma/ComfyUI-MagCache.git
  3. Enter the ComfyUI-MagCache folder: cd ComfyUI-MagCache/
  4. Install the dependencies: pip install -r requirements.txt
  5. Go back to the project folder ComfyUI/ and run python main.py

Usage

Download Model Weights

Please first prepare the model weights in ComfyUI format by referring to the following links:

MagCache

We're collecting the best parameter settings from the community. Open this discussion issue to contribute your configuration!

To use the MagCache node, simply add it to your workflow after the Load Diffusion Model node (or the Load LoRA node, if you use LoRA). MagCache generally achieves a 2x to 3x speedup with acceptable loss of visual quality. The following table gives the recommended magcache_thresh, retention_ratio, and magcache_K for different models:

| Model | magcache_thresh | retention_ratio | magcache_K |
| ----- | --------------- | --------------- | ---------- |
| FLUX | 0.24 | 0.1 | 5 |
| FLUX-Kontext | 0.05 | 0.2 | 4 |
| Chroma | 0.10 | 0.25 | 2 |
| Qwen-Image | 0.10 | 0.20 | 2 |
| HunyuanVideo-T2V | 0.24 | 0.2 | 6 |
| HunyuanVideo1.5-T2V (20 steps) | 0.03 | 0.25 | 2 |
| Wan2.1-T2V-1.3B | 0.12 | 0.2 | 4 |
| Wan2.1-T2V-14B | 0.24 | 0.2 | 6 |
| Wan2.1-I2V-480P-14B | 0.24 | 0.2 | 6 |
| Wan2.1-I2V-720P-14B | 0.24 | 0.2 | 6 |
| Wan2.1-VACE-1.3B | 0.02 | 0.2 | 3 |
| Wan2.1-VACE-14B | 0.02 | 0.2 | 3 |

If the image/video quality after applying MagCache is low, decrease magcache_thresh and magcache_K. The default parameters are tuned for extremely fast inference (2x-3x), which may fail in some cases.
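
To make the roles of the three knobs concrete, here is a minimal, hypothetical sketch of an adaptive skip decision; the state dictionary, the error accumulation rule, and the mag_ratios list are simplified assumptions rather than the node's exact code:

```python
def should_skip_step(step, total_steps, mag_ratios, state,
                     magcache_thresh=0.24, retention_ratio=0.2, magcache_K=6):
    """Hypothetical sketch: decide whether to reuse the cached residual."""
    # Always run the model for the first retention_ratio fraction of steps,
    # where outputs change the most.
    if step < int(retention_ratio * total_steps):
        state["err"], state["skips"] = 0.0, 0
        return False
    # Accumulate an error estimate from the calibrated magnitude ratios:
    # the further the ratio drifts from 1, the faster the output is changing.
    state["err"] += abs(1.0 - mag_ratios[step])
    # Skip only while the accumulated error stays below magcache_thresh
    # and no more than magcache_K consecutive steps are skipped.
    if state["err"] <= magcache_thresh and state["skips"] < magcache_K:
        state["skips"] += 1
        return True
    state["err"], state["skips"] = 0.0, 0
    return False
```

In this reading, lowering magcache_thresh makes skipping more conservative, while lowering magcache_K shortens the longest run of reused steps; both trade speed for quality.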

The demo workflows (flux, flux-kontext, qwen-image, chroma, hunyuanvideo, hunyuanvideo1.5, wan2.1_t2v, wan2.1_i2v, and wan2.1_vace) are placed in the examples folder. The chroma_calibration workflow is used to calibrate the mag_ratios for Chroma when the number of inference steps differs from the predefined value. In our experiments, the videos generated with the quantized Wan2.1 weights are not as high-quality as those produced by the original unquantized version.
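
For a rough idea of how a calibrated ratio schedule could be adapted to a different step count without rerunning calibration, a nearest-neighbor resampling sketch follows; the function is an assumption for illustration, and an actual calibration run (as with chroma_calibration) is the more reliable route:

```python
import numpy as np

def resample_mag_ratios(calibrated: np.ndarray, num_steps: int) -> np.ndarray:
    """Nearest-neighbor resampling of calibrated magnitude ratios to a new
    number of inference steps (illustrative sketch only)."""
    idx = np.round(np.linspace(0, len(calibrated) - 1, num_steps)).astype(int)
    return calibrated[idx]
```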

Compile Model

To use the Compile Model node, simply add it to your workflow after the Load Diffusion Model node or the MagCache node. Compile Model uses torch.compile to enhance model performance by compiling the model into more efficient intermediate representations (IRs); backend compilers then generate optimized code, which can significantly speed up inference. Compilation may take a long time the first time you run the workflow, but once the model is compiled, subsequent inference is extremely fast.
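
Under the hood this is standard torch.compile usage; a minimal, self-contained example follows (the toy module and the mode choice are illustrative, not the node's exact settings):

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.GELU())

# The first call triggers compilation into optimized kernels (slow once);
# subsequent calls reuse the compiled artifact and run much faster.
compiled = torch.compile(model, mode="reduce-overhead")
out = compiled(torch.randn(8, 64))
```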

Acknowledgments

Thanks to ComfyUI-TeaCache, ComfyUI, MagCache, TeaCache, HunyuanVideo, FLUX, Chroma, Qwen-Image, and Wan2.1.
