Jinpei Guo, Yifei Ji, Zheng Chen, Yufei Wang, Sizhuo Ma, Yong Guo, Yulun Zhang, and Jian Wang, "Towards Redundancy Reduction in Diffusion Models for Efficient Video Super-Resolution", arXiv, 2025
[project page] [paper] [supplementary material]
- 2025-09-30: This repo is released.
Abstract: Diffusion models have recently shown promising results for video super-resolution (VSR). However, directly adapting generative diffusion models to VSR can result in redundancy, since low-quality videos already preserve substantial content information. Such redundancy leads to increased computational overhead and learning burden, as the model performs superfluous operations and must learn to filter out irrelevant information. To address this problem, we propose OASIS, an efficient one-step diffusion model with attention specialization for real-world video super-resolution. OASIS incorporates an attention specialization routing that assigns attention heads to different patterns according to their intrinsic behaviors. This routing mitigates redundancy while effectively preserving pretrained knowledge, allowing diffusion models to better adapt to VSR and achieve stronger performance. Moreover, we propose a simple yet effective progressive training strategy, which starts with temporally consistent degradations and then shifts to inconsistent settings. This strategy facilitates learning under complex degradations. Extensive experiments demonstrate that OASIS achieves state-of-the-art performance on both synthetic and real-world datasets. OASIS also provides superior inference speed, offering a 6.2× speedup over one-step diffusion baselines such as SeedVR2.
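To make the "attention specialization routing" idea concrete, below is a minimal, hypothetical sketch of head-level routing between two attention patterns (global attention vs. local-window attention), where each head learns a gate deciding which pattern it specializes in. This is not the released OASIS code: the module name `HeadSpecializationRouting`, the choice of the two patterns, and the sigmoid gating scheme are illustrative assumptions only.

```python
# Hypothetical sketch of head-level attention specialization routing (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HeadSpecializationRouting(nn.Module):
    """Each attention head is softly routed to one of two patterns:
    full (global) attention or local-window attention."""

    def __init__(self, dim: int, num_heads: int, window: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.window = window
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One routing logit per head; sigmoid(gate) mixes global vs. local outputs.
        self.gate = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, tokens, dim); tokens = frames * height * width after flattening.
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.reshape(b, n, self.num_heads, self.head_dim).transpose(1, 2)
                   for t in (q, k, v))  # (b, heads, n, head_dim)

        # Pattern 1: full (global) attention over all tokens.
        out_global = F.scaled_dot_product_attention(q, k, v)

        # Pattern 2: attention restricted to non-overlapping local windows.
        pad = (-n) % self.window
        qp, kp, vp = (F.pad(t, (0, 0, 0, pad)) for t in (q, k, v))
        num_windows = (n + pad) // self.window
        qp, kp, vp = (t.reshape(b, self.num_heads, num_windows, self.window, self.head_dim)
                      for t in (qp, kp, vp))
        out_local = F.scaled_dot_product_attention(qp, kp, vp)
        out_local = out_local.reshape(b, self.num_heads, -1, self.head_dim)[:, :, :n]

        # Per-head routing: each head softly picks the pattern it specializes in.
        g = torch.sigmoid(self.gate).view(1, self.num_heads, 1, 1)
        out = g * out_global + (1.0 - g) * out_local

        out = out.transpose(1, 2).reshape(b, n, -1)
        return self.proj(out)


if __name__ == "__main__":
    # Toy usage: 256 tokens of dimension 320, routed across 8 heads.
    layer = HeadSpecializationRouting(dim=320, num_heads=8)
    y = layer(torch.randn(1, 256, 320))
    print(y.shape)  # torch.Size([1, 256, 320])
```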
- Release code and pretrained models
✨ For more visual results, visit our project page! ✨
If you find the code helpful in your research or work, please cite our paper.
@article{guo2025towards,
  title={Towards Redundancy Reduction in Diffusion Models for Efficient Video Super-Resolution},
  author={Guo, Jinpei and Ji, Yifei and Chen, Zheng and Wang, Yufei and Ma, Sizhuo and Guo, Yong and Zhang, Yulun and Wang, Jian},
  journal={arXiv preprint arXiv:2509.23980},
  year={2025}
}




