Yixiao Wang*, Ting Jiang*, Zishan Shao*, Hancheng Ye, Jingwei Sun, Mingyuan Ma, Jianyi Zhang, Yiran Chen, Hai Li
Duke University, * Equal contribution.
What does it minimally take to accelerate generative models—without training?
*Fig. 1. Overview of the ZEUS pipeline.*

⚡ ZEUS can be directly adapted to any 🤗 Hugging Face Diffusers workflow. Start a new environment with:

```bash
conda create -n zeus python=3.10
conda activate zeus
pip install -r requirements.txt
```

We provide the following demos to test ZEUS. Simply run one of:

```bash
python sd_demo.py
python xl_demo.py
python flux_demo.py
python wan2_demo.py
python cogvideo_demo.py
```

Each demo accepts the flags `--solver {dpm|euler}`, `--prompt`, and `--seed`.
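For example, to run the Stable Diffusion demo with the DPM solver (the seed and prompt values below are illustrative, not required):

```bash
python sd_demo.py --solver dpm --seed 42 --prompt "an astronaut riding a horse"
```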
To patch an existing Diffusers pipeline `pipe` with ZEUS:

```python
from zeus import patch

patch.apply_patch(
    pipe,
    acc_range=(10, 45),           # when to apply ZEUS
    interp_mode="psi",
    caching_mode="reuse_interp",  # default: ZEUS pattern
    denominator=3,                # sparsity ratio
    modular=(0, 1),
    lagrange_int=4,
    lagrange_step=24,
    lagrange_term=4,
)
```
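A minimal end-to-end sketch of how this might fit into a standard Diffusers workflow; the checkpoint ID, device, dtype, prompt, and output path are illustrative assumptions rather than ZEUS requirements:

```python
# Minimal sketch (assumptions: Stable Diffusion v1.5 checkpoint, CUDA device,
# fp16 weights; swap in the pipeline and prompt you actually use).
import torch
from diffusers import StableDiffusionPipeline
from zeus import patch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Patch the pipeline with the settings shown above, then generate as usual.
patch.apply_patch(
    pipe,
    acc_range=(10, 45),
    interp_mode="psi",
    caching_mode="reuse_interp",
    denominator=3,
    modular=(0, 1),
    lagrange_int=4,
    lagrange_step=24,
    lagrange_term=4,
)

image = pipe("an astronaut riding a horse", num_inference_steps=50).images[0]
image.save("zeus_sd_demo.png")
```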
If you find this work useful, please cite our paper:

```bibtex
@misc{zeus2025,
  title        = {ZEUS: Zero-shot Efficient Unified Sparsity for Generative Models},
  author       = {Yixiao Wang and Ting Jiang and Zishan Shao and Hancheng Ye and Jingwei Sun and Mingyuan Ma and Jianyi Zhang and Yiran Chen and Hai Li},
  year         = {2025},
  howpublished = {\url{https://yixiao-wang-stats.github.io/zeus/}},
  note         = {Code and project page available at \url{https://github.com/Ting-Justin-Jiang/ZEUS}}
}
```

The ZEUS codebase is built upon the excellent work of SADA, Hugging Face Diffusers, and ToMeSD.
