
Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors


Aiping Zhang1*, Zongsheng Yue2,*, Renjing Pei3, Wenqi Ren1, Xiaochun Cao1

1School of Cyber Science and Technology, Shenzhen Campus of Sun Yat-sen University
2S-Lab, Nanyang Technological University
3Huawei Noah's Ark Lab
* Equal contribution.

🔥🔥🔥 We have released the code, cheers!

⭐ If S3Diff is helpful for you, please help star this repo. Thanks! 🤗


🆕 Update

  • 2024.10.07: Add gradio demo 🚀
  • 2024.09.25: The code is released 🔥
  • 2024.09.25: This repo is released 🔥

⌛ TODO

  • Release Code 💻
  • Release Checkpoints 🔗

🎆 Abstract

Diffusion-based image super-resolution (SR) methods have achieved remarkable success by leveraging large pre-trained text-to-image diffusion models as priors. However, these methods still face two challenges: the requirement for dozens of sampling steps to achieve satisfactory results, which limits efficiency in real scenarios, and the neglect of degradation models, which are critical auxiliary information in solving the SR problem. In this work, we introduce a novel one-step SR model, which addresses the efficiency issue of diffusion-based SR methods. Unlike existing fine-tuning strategies, we design a degradation-guided Low-Rank Adaptation (LoRA) module specifically for SR, which corrects the model parameters based on degradation information pre-estimated from the low-resolution image. This module not only yields a powerful data-dependent, degradation-dependent SR model but also preserves the generative prior of the pre-trained diffusion model as much as possible. Furthermore, we tailor a novel training pipeline by introducing an online negative-sample generation strategy. Combined with classifier-free guidance at inference, this strategy largely improves the perceptual quality of the super-resolution results. Extensive experiments demonstrate the superior efficiency and effectiveness of the proposed model compared to recent state-of-the-art methods.
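To make the guidance step concrete, here is a minimal sketch of how a one-step prediction could blend positive and negative prompt branches via classifier-free guidance; the function and argument names are hypothetical, not the repo's API.

import torch

@torch.no_grad()
def cfg_one_step(model, lr_latent, pos_emb, neg_emb, guidance_scale=1.5):
    # One forward pass per prompt embedding, then the standard CFG blend,
    # steering the output toward the positive and away from the negative prompt.
    pred_pos = model(lr_latent, prompt_emb=pos_emb)
    pred_neg = model(lr_latent, prompt_emb=neg_emb)
    return pred_neg + guidance_scale * (pred_pos - pred_neg)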

👀 Framework Overview

⭐ Overview of S3Diff. We enhance a pre-trained diffusion model for one-step SR by injecting LoRA layers into the VAE encoder and the UNet. In addition, a pre-trained Degradation Estimation Network assesses the degradation of the input image, and this estimate, together with the introduced block ID embeddings, guides the LoRA layers. We also tailor a new training pipeline with online negative prompting, which reuses the generated LR images paired with negative text prompts. The network is trained with a combination of a reconstruction loss and a GAN loss.
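As a rough illustration of the idea, the sketch below shows a linear layer with a LoRA branch whose update is modulated by a degradation embedding and a learned block ID embedding. This is a minimal, hypothetical rendering of the mechanism, not the repo's actual classes.

import torch
import torch.nn as nn

class DegradationGuidedLoRA(nn.Module):
    def __init__(self, base: nn.Linear, rank: int, deg_dim: int, num_blocks: int, id_dim: int = 32):
        super().__init__()
        self.base = base.requires_grad_(False)  # frozen pre-trained layer
        self.down = nn.Linear(base.in_features, rank, bias=False)
        self.up = nn.Linear(rank, base.out_features, bias=False)
        self.block_id_emb = nn.Embedding(num_blocks, id_dim)
        # Maps (degradation embedding, block ID embedding) to a per-rank scale.
        self.modulator = nn.Linear(deg_dim + id_dim, rank)

    def forward(self, x, deg_emb, block_id):
        scale = self.modulator(torch.cat([deg_emb, self.block_id_emb(block_id)], dim=-1))
        return self.base(x) + self.up(self.down(x) * scale)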

📈 Visual Comparison

[Image slider results]

Synthetic Dataset

Real-World Dataset

⚙️ Setup

conda create -n s3diff python=3.10
conda activate s3diff
pip install -r requirements.txt

Alternatively, use the provided conda environment file, which contains all required dependencies:

conda env create -f environment.yaml

⭐ Since we employ peft in our code, we strongly recommend following the provided environment requirements, especially the pinned diffusers version.
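To confirm that the installed versions match the pinned requirements, you can print them (both libraries expose __version__):

python -c "import diffusers, peft; print(diffusers.__version__, peft.__version__)"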

🔧 Training

Step 1: Download the pretrained models

Our code downloads the pretrained models automatically. If you need to conduct offline training, download the pretrained model SD-Turbo.

Step 2: Prepare training data

We train S3Diff on LSDIR plus 10K samples from FFHQ, following SeeSR and OSEDiff.

Step 3: Training for S3Diff

Please modify the paths to the training datasets in configs/sr.yaml (a sketch of the relevant fields follows the command below), then run:

sh run_training.sh
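For reference, the dataset-path section of configs/sr.yaml looks roughly like the sketch below; the key names here are illustrative guesses, so check the shipped config for the actual ones.

# hypothetical keys -- see the shipped configs/sr.yaml for the real ones
train:
  gt_path: /data/LSDIR/HR        # high-resolution training images
  face_gt_path: /data/FFHQ_10k   # the 10K FFHQ samples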

If you need to conduct offline training, modify run_training.sh as follows, filling in sd_path with your local path:

accelerate launch --num_processes=4 --gpu_ids="0,1,2,3" --main_process_port 29300 src/train_s3diff.py \
    --sd_path="path_to_checkpoints/sd-turbo" \
    --de_net_path="assets/mm-realsr/de_net.pth" \
    --output_dir="./output" \
    --resolution=512 \
    --train_batch_size=4 \
    --enable_xformers_memory_efficient_attention \
    --viz_freq 25
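If you only have a single GPU, the same arguments work with the accelerate flags reduced accordingly (you may also need a smaller --train_batch_size to fit memory):

accelerate launch --num_processes=1 --gpu_ids="0," --main_process_port 29300 src/train_s3diff.py [same arguments as above]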

💫 Inference

Step 1: Download datasets for inference

Step 2: Download the pretrained models

Our code downloads the pretrained models automatically. If you need to conduct offline inference, download the pretrained model SD-Turbo and the S3Diff weights [HuggingFace | GoogleDrive].

Step 3: Inference for S3Diff

Please add the paths to the evaluation datasets in configs/sr_test.yaml and the path to the ground-truth folder in run_inference.sh, then run:

sh run_inference.sh

If you need to conduct offline inference, modify run_inference.sh as follows, filling in your local paths:

accelerate launch --num_processes=1 --gpu_ids="0," --main_process_port 29300 src/inference_s3diff.py \
    --sd_path="path_to_checkpoints/sd-turbo" \
    --de_net_path="assets/mm-realsr/de_net.pth" \
    --pretrained_path="path_to_checkpoints/s3diff.pkl" \
    --output_dir="./output" \
    --ref_path="path_to_ground_truth_folder" \
    --align_method="wavelet"
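The --align_method flag selects how the output colors are aligned. As a rough sketch of what wavelet-based color correction typically does, the snippet below swaps the low-frequency wavelet band of the result with that of a same-sized reference image using PyWavelets; this is a minimal illustration of the common technique, not the repo's exact implementation.

import numpy as np
import pywt

def wavelet_color_fix(sr, ref, levels=5, wavelet="db1"):
    # sr, ref: HxWx3 float arrays in [0, 1] with identical shapes.
    h, w = sr.shape[:2]
    out = np.empty_like(sr)
    for c in range(3):
        sr_coeffs = pywt.wavedec2(sr[..., c], wavelet, level=levels)
        ref_coeffs = pywt.wavedec2(ref[..., c], wavelet, level=levels)
        sr_coeffs[0] = ref_coeffs[0]          # take the reference's low frequencies
        rec = pywt.waverec2(sr_coeffs, wavelet)
        out[..., c] = rec[:h, :w]             # waverec2 may pad odd-sized inputs
    return np.clip(out, 0.0, 1.0)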

Gradio Demo

Please install Gradio first:

pip install gradio

Please run the following command to launch the Gradio demo in your browser, and have fun. 🤗

python src/gradio_s3diff.py 

[Screenshot of the Gradio demo]

😃 Citation

Please cite us if our work is useful for your research.

@article{2024s3diff,
  author    = {Zhang, Aiping and Yue, Zongsheng and Pei, Renjing and Ren, Wenqi and Cao, Xiaochun},
  title     = {Degradation-Guided One-Step Image Super-Resolution with Diffusion Priors},
  journal   = {arXiv preprint},
  year      = {2024},
}

📓 License

This project is released under the Apache 2.0 license.

✉️ Contact

If you have any questions, please feel free to contact zhangaip7@mail2.sysu.edu.cn.
