This repository is for RNAN, introduced in the following paper:

Yulun Zhang, Kunpeng Li, Kai Li, Bineng Zhong, and Yun Fu, "Residual Non-local Attention Networks for Image Restoration", ICLR 2019, [OpenReview]
[Demosaic] The code is built on EDSR (PyTorch) and tested on Ubuntu 14.04/16.04 (Python 3.6, PyTorch 0.3.1, CUDA 8.0, cuDNN 5.1) with Titan X/1080Ti/Xp GPUs.
- Download DIV2K training data (800 training + 100 validation images) from the DIV2K dataset or SNU_CVLab.
- Specify '--dir_data' as the path to the HR and LR images. In option.py, '--ext' is set to 'sep_reset', which first converts each .png to .npy. Once all training images (.png) have been converted to .npy files, set '--ext sep' to skip the conversion. For more information, please refer to EDSR (PyTorch).
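The idea behind 'sep_reset' can be illustrated with a small sketch (the `load_cached` helper and `decode_fn` stand-in are hypothetical, not part of the EDSR codebase): each .png is decoded once and cached as .npy, so later epochs skip the slow image decoding.

```python
import os
import tempfile

import numpy as np

def load_cached(png_path, decode_fn):
    """Load an image, caching it as .npy on first access (mimics '--ext sep_reset').

    decode_fn is a hypothetical stand-in for a real .png decoder.
    """
    npy_path = os.path.splitext(png_path)[0] + ".npy"
    if not os.path.exists(npy_path):      # first pass: decode and cache
        np.save(npy_path, decode_fn(png_path))
    return np.load(npy_path)              # later passes: fast binary load

# demo with a fake decoder so the sketch is self-contained
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "0001.png")
    fake_decode = lambda p: np.zeros((4, 4, 3), dtype=np.uint8)
    a = load_cached(path, fake_decode)    # decodes and writes 0001.npy
    b = load_cached(path, fake_decode)    # served from the .npy cache
    assert (a == b).all() and a.shape == (4, 4, 3)
```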
- (Optional) Download the models for our paper and place them in '/RNAN/[Task]/experiment/model', where [Task] is one of CAR, DN_RGB, DN_Gray, Demosaic, or SR. All the models can be downloaded from GitHub.
-
Cd to 'RNAN/[Task]/code', run the following scripts to train models.
You can use scripts in file 'Train_RNAN_scripts' to train models for our paper.
```bash
# PyTorch 0.3.1, train script
# RNAN_F64G10P48L2N1: N1 means case 1
python main.py --model RNAN --noise_level 1 --save RNAN_Demosaic_RGB_F64G10P48L2N1 --patch_size 48 --save_results --chop --loss 1*MSE
```
- Download the models for our paper and place them in '/RNAN/[Task]/experiment/model', where [Task] is one of CAR, DN_RGB, DN_Gray, Demosaic, or SR. All the models can be downloaded from GitHub.
- Cd to 'RNAN/[Task]/code' and run the following scripts. The scripts in 'Test_RNAN_scripts' reproduce the results in our paper.
```bash
# PyTorch 0.3.1, test script
# No self-ensemble; use the different test sets (Kodak24, CBSD68, McMaster18, Urban100) to reproduce the results in the paper.
# case 1
python main.py --model RNAN --data_test Demo --noise_level 1 --save Test_RNAN --n_cab_1 20 --save_results --test_only --chop --pre_train ../experiment/model/RNAN_Demosaic_RGB_F64G10P48L2N1.pt --testpath ../experiment/LQ --testset Kodak24
```
- Prepare test data. Place the original test sets (e.g., Set5; other test sets are available from GoogleDrive or Baidu) in 'OriginalTestData'. Run 'Prepare_TestData_HR_LR.m' in Matlab to generate HR/LR images with different degradation models.
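For the demosaicing task, the low-quality input is a Bayer-mosaicked image. A minimal numpy sketch of the mosaicing step (RGGB layout assumed here; the actual degradation for our results is produced by the Matlab script above) looks like:

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGB image onto an RGGB Bayer pattern (assumed layout).

    The returned mosaic keeps one color channel per pixel and zeros the rest,
    which is the usual input representation for a demosaicing network.
    """
    mosaic = np.zeros_like(rgb)
    mosaic[0::2, 0::2, 0] = rgb[0::2, 0::2, 0]  # R at even rows, even cols
    mosaic[0::2, 1::2, 1] = rgb[0::2, 1::2, 1]  # G at even rows, odd cols
    mosaic[1::2, 0::2, 1] = rgb[1::2, 0::2, 1]  # G at odd rows, even cols
    mosaic[1::2, 1::2, 2] = rgb[1::2, 1::2, 2]  # B at odd rows, odd cols
    return mosaic

rgb = np.random.randint(0, 256, (8, 8, 3), dtype=np.uint8)
cfa = bayer_mosaic(rgb)
# at most one color channel survives per pixel
assert (np.count_nonzero(cfa, axis=2) <= 1).all()
```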
- Conduct image demosaicing. See Quick start.
- Evaluate the results. Run 'Evaluate_PSNR_SSIM.m' to obtain the PSNR/SSIM values reported in the paper.
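For a quick sanity check without Matlab, PSNR can be computed in a few lines of Python. Note this is only a sketch: the Matlab script may also shift/crop borders or evaluate on the Y channel, so values computed this way can differ slightly from those in the paper.

```python
import numpy as np

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio between two images, in dB."""
    ref = ref.astype(np.float64)
    test = test.astype(np.float64)
    mse = np.mean((ref - test) ** 2)
    if mse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

a = np.zeros((16, 16), dtype=np.uint8)
b = a.copy()
b[0, 0] = 16                              # a single differing pixel
print(round(psnr(a, b), 2))               # → 48.13
```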
If you find the code helpful in your research or work, please cite the following papers.
```
@InProceedings{Lim_2017_CVPR_Workshops,
  author    = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title     = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {July},
  year      = {2017}
}

@inproceedings{zhang2019rnan,
  title     = {Residual Non-local Attention Networks for Image Restoration},
  author    = {Zhang, Yulun and Li, Kunpeng and Li, Kai and Zhong, Bineng and Fu, Yun},
  booktitle = {ICLR},
  year      = {2019}
}
```
This code is built on EDSR (PyTorch). We thank the authors for sharing the code of the EDSR Torch and PyTorch versions.