This repository is for the super-resolution survey introduced in the following paper:
Saeed Anwar, Salman Khan, Nick Barnes, "A Deep Journey into Super-Resolution: A Survey", ACM Computing Surveys, June 2020. Available at ACM and arXiv.
Deep convolutional network-based super-resolution is a fast-growing field with numerous practical applications. In this exposition, we extensively compare 30+ state-of-the-art super-resolution Convolutional Neural Networks (CNNs) over three classical and three recently introduced challenging datasets to benchmark single-image super-resolution. We introduce a taxonomy for deep-learning-based super-resolution networks that groups existing methods into nine categories, including linear, residual, multi-branch, recursive, progressive, attention-based, and adversarial designs. We also provide comparisons between the models in terms of network complexity, memory footprint, model input and output, learning details, the type of network losses, and important architectural differences (e.g., depth, skip-connections, filters). The extensive evaluation shows consistent and rapid growth in accuracy over the past few years, along with a corresponding boost in model complexity and the availability of large-scale datasets. It is also observed that the pioneering methods identified as benchmarks have been significantly outperformed by the current contenders. Despite the progress of recent years, we identify several shortcomings of existing techniques and provide future research directions toward the solution of these open problems.
An overview of the existing single-image super-resolution techniques.
A glimpse of the diverse range of network architectures used for single-image super-resolution with deep networks.
We compare the state-of-the-art algorithms on publicly available benchmark datasets, which include Set5, Set14, BSD100, Urban100, DIV2K, and Manga109.
Representative test images from the six super-resolution datasets used for comparing and evaluating the algorithms.
Mean PSNR and SSIM for the SR methods evaluated on the benchmark datasets. A '-' indicates that the method is not suitable for handling the images of the corresponding dataset.
The results for 8x super-resolution.
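For reference, the measurement behind these tables can be sketched in a few lines of Python. This is a minimal, illustrative snippet rather than the survey's evaluation code: it assumes the common SR convention of measuring on the luminance (Y) channel after cropping a `scale`-pixel border, and the helper names are ours.

```python
# Minimal sketch of per-image PSNR/SSIM evaluation as commonly done in SR
# benchmarks. Cropping and Y-channel conventions are assumptions, not the
# survey's exact protocol.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def rgb_to_y(img):
    """Luminance (Y) channel of an RGB uint8 image (ITU-R BT.601)."""
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1]
                   + 24.966 * img[..., 2]) / 255.0

def evaluate_pair(sr, hr, scale):
    """PSNR/SSIM between one SR output and its ground truth.

    A border of `scale` pixels is cropped before measuring, and both
    metrics are computed on the Y channel, as is common practice.
    """
    y_sr = rgb_to_y(sr)[scale:-scale, scale:-scale]
    y_hr = rgb_to_y(hr)[scale:-scale, scale:-scale]
    psnr = peak_signal_noise_ratio(y_hr, y_sr, data_range=255)
    ssim = structural_similarity(y_hr, y_sr, data_range=255)
    return psnr, ssim
```

Averaging `evaluate_pair` over all images of a dataset gives the mean PSNR/SSIM figures reported in the tables.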
Qualitative super-resolution comparison of CNN-SR algorithms for 4x and 8x.
Visual comparison of GAN-SR algorithms for 4x.
Comparison of the parameters of CNN-based SR algorithms. GRL stands for global residual learning, LRL for local residual learning, and MST for multi-scale training.
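To make the GRL/LRL distinction concrete, below is a toy PyTorch sketch (not any surveyed model; the layer widths and block count are arbitrary assumptions). The skip connection inside each block is local residual learning, while adding the interpolated input to the network output is global residual learning.

```python
# Toy sketch contrasting local (LRL) and global (GRL) residual learning.
import torch
import torch.nn as nn

class LocalResidualBlock(nn.Module):
    """LRL: a short skip connection around a small stack of conv layers."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)  # local skip

class ToySRNet(nn.Module):
    """GRL: the network predicts a residual added to the interpolated input."""
    def __init__(self, channels=64, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.Sequential(
            *[LocalResidualBlock(channels) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(channels, 3, 3, padding=1)

    def forward(self, x_upscaled):
        # x_upscaled: the LR image bicubically upsampled to the HR size
        return x_upscaled + self.tail(self.blocks(self.head(x_upscaled)))  # global skip
```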
Comparison of multiplication-addition operations in various SR networks. Note that FLOPs are roughly double the number of mult-adds, and algorithmic runtime (during inference) is proportional to the mult-add count.
Comparison of the number of parameters in various SR architectures. The memory footprint and training time of a model are directly related to the number of tunable parameters.
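As a rough guide to how these two quantities are obtained for a single convolutional layer, here is a back-of-the-envelope Python sketch (the example layer shape is an arbitrary assumption):

```python
# Parameter and mult-add counts for one k x k conv layer.
def conv2d_params(c_in, c_out, k):
    """Tunable parameters of a k x k convolution (weights + biases)."""
    return c_out * (c_in * k * k + 1)

def conv2d_mult_adds(c_in, c_out, k, h_out, w_out):
    """Multiply-add operations for one forward pass over an h x w output.
    FLOPs are roughly 2x this count (one multiply + one add per mult-add)."""
    return c_out * h_out * w_out * (c_in * k * k)

# e.g., a 3x3, 64 -> 64 conv producing a 256x256 feature map:
print(conv2d_params(64, 64, 3))               # 36,928 parameters
print(conv2d_mult_adds(64, 64, 3, 256, 256))  # ~2.4 G mult-adds
```

Summing these counts over all layers of a network yields the totals compared in the plots above.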
If you find the code helpful in your research or work, please cite the following papers.

@article{anwar2020deepSR,
author = {Anwar, Saeed and Khan, Salman and Barnes, Nick},
title = {A Deep Journey into Super-Resolution: A Survey},
year = {2020},
issue_date = {June 2020},
publisher = {Association for Computing Machinery (ACM)},
address = {New York, NY, USA},
volume = {53},
number = {3},
issn = {0360-0300},
journal = {ACM Computing Surveys (CSUR)},
month = may,
articleno = {60},
numpages = {34},
}
@article{anwar2019drln,
title={Densely Residual Laplacian Super-Resolution},
author={Anwar, Saeed and Barnes, Nick},
journal={arXiv preprint arXiv:1906.12021},
year={2019}
}