A Unified Conditional Framework for Diffusion-based Image Restoration

Yi Zhang1, Xiaoyu Shi1, Dasong Li1, Xiaogang Wang1, Jian Wang2, Hongsheng Li1
1The Chinese University of Hong Kong, 2Snap Research

Project

Paper


Abstract: Diffusion Probabilistic Models (DPMs) have recently shown remarkable performance in image generation tasks, which are capable of generating highly realistic images. When adopting DPMs for image restoration tasks, the crucial aspect lies in how to integrate the conditional information to guide the DPMs to generate accurate and natural output, which has been largely overlooked in existing works. In this paper, we present a unified conditional framework based on diffusion models for image restoration. We leverage a lightweight UNet to predict initial guidance and the diffusion model to learn the residual of the guidance. By carefully designing the basic module and integration module for the diffusion model block, we integrate the guidance and other auxiliary conditional information into every block of the diffusion model to achieve spatially-adaptive generation conditioning. To handle high-resolution images, we propose a simple yet effective inter-step patch-splitting strategy to produce arbitrary-resolution images without grid artifacts. We evaluate our conditional framework on three challenging tasks: extreme low-light denoising, deblurring, and JPEG restoration, demonstrating its significant improvements in perceptual quality and the generalization to restoration tasks.
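The two-branch design described above (a lightweight UNet predicts an initial guidance image; the diffusion model generates only the residual of that guidance) can be sketched in toy form. The names `restore`, `guidance_net`, and `residual_sampler` are illustrative placeholders, not this repo's API:

```python
import numpy as np

def restore(y, guidance_net, residual_sampler):
    """Toy view of the unified conditional framework: the final output is
    the initial guidance plus a diffusion-generated residual, conditioned
    on the degraded input y and the guidance itself."""
    g = guidance_net(y)         # lightweight UNet: fast, coarse estimate
    r = residual_sampler(y, g)  # diffusion model: samples the residual of g
    return g + r
```

In the actual framework, the guidance and other auxiliary conditions are also injected into every block of the diffusion model for spatially-adaptive conditioning; this sketch only shows the output decomposition.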


Network Architecture

Training

 Coming soon.

Evaluation

SID

  1. Download the denoising model and put it in the folder './experiments/sid/checkpoint'.
  2. Download the testing dataset and put it in the folder './dataset'.
# inference
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python -m torch.distributed.launch \
    --nproc_per_node=8 --master_port=4321 \
    sr.py -p val -c config/sid.yaml \
    --checkpoint experiments/sid/checkpoint/I_Elatest
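At inference on high-resolution inputs, the paper's inter-step patch-splitting strategy processes the image in fixed-size tiles whose grid is shifted between diffusion steps, so no single patch boundary persists across the whole sampling trajectory. The helper below is a minimal illustration under assumed details (reflect padding, non-overlapping tiles), not the repo's implementation:

```python
import numpy as np

def apply_tiled(img, step_fn, patch=64, shift=0):
    """Run step_fn on non-overlapping patches of img, with the tile grid
    offset by `shift` pixels; varying `shift` across diffusion steps moves
    the patch boundaries so grid artifacts do not accumulate in one place."""
    h, w = img.shape[:2]
    pad = ((shift, patch), (shift, patch)) + ((0, 0),) * (img.ndim - 2)
    padded = np.pad(img, pad, mode="reflect")
    for y in range(0, h + shift, patch):
        for x in range(0, w + shift, patch):
            tile = padded[y:y + patch, x:x + patch]
            padded[y:y + patch, x:x + patch] = step_fn(tile)
    return padded[shift:shift + h, shift:shift + w]
```

A sampler would call `apply_tiled(x_t, denoise_step, shift=offsets[t])` with a different offset per step `t`.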

Download the SID results of our paper and put them in the folder './experiments/val_sid-ema-s50/results'.

# calculate PSNR, SSIM, LPIPS, FID, and KID
python -u eval1.py  \
-s experiments/val_sid-ema-s50/results
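Among the metrics reported by the script, PSNR reduces to a few lines of numpy. The standalone helper below is for reference only, not the repo's evaluation code:

```python
import numpy as np

def psnr(reference, restored, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two images in [0, max_val]."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```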

Citation

If you use UCDIR, please consider citing:

@article{zhang2023UCDIR,
  author    = {Zhang, Yi and Shi, Xiaoyu and Li, Dasong and Wang, Xiaogang and Wang, Jian and Li, Hongsheng},
  title     = {A Unified Conditional Framework for Diffusion-based Image Restoration},
  journal   = {arXiv preprint arXiv:2305.20049},
  year      = {2023},
}

Contact

Should you have any questions, please contact zhangyi@link.cuhk.edu.hk.

Acknowledgment: BasicSR, S3.
