The official implementation of the AAAI 2024 paper *Multi-Domain Multi-Scale Diffusion Model for Low-Light Image Enhancement*.
Create a new conda env, and run
$ pip install -r requirements.txt
Any torch/torchvision build with CUDA version >= 11.3 should be fine.
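For example, a minimal setup; the env name `mdms` and Python 3.8 are placeholder choices, not pinned by this repo:

```
$ conda create -n mdms python=3.8
$ conda activate mdms
$ pip install -r requirements.txt
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"  # sanity-check the CUDA build
```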
Download the pretrained MDMS model from Baidu NetDisk or Google Drive, and put the downloaded checkpoint in `datasets/scratch/LLIE/ckpts`.
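For example, from the repo root (the checkpoint filename below is a placeholder for whatever you downloaded):

```
$ mkdir -p datasets/scratch/LLIE/ckpts
$ mv /path/to/<downloaded_ckpt> datasets/scratch/LLIE/ckpts/
```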
# in {path_to_this_repo}/,
$ python eval_diffusion.py
Put the test input in `datasets/scratch/LLIE/data/lowlight/test/input`.
Output results will be saved in `results/images/lowlight/lowlight`.
Put the test GT in `datasets/scratch/LLIE/data/lowlight/test/gt` for paired evaluation.
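For reference, the test data layout implied by the paths above (pairing input and GT images by filename is an assumption):

```
datasets/scratch/LLIE/data/lowlight/test/
├── input/   # low-light test inputs
└── gt/      # ground-truth references for paired evaluation
```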
# in {path_to_this_repo}/,
$ python evaluation.py
- Note that our evaluation metrics differ slightly from PyDiff's (which are inherited from BasicSR); see the sketch after this list:
  - For SSIM, we calculate the metric directly on the RGB channels rather than only on grayscale images as in PyDiff.
  - For LPIPS, we use a different normalization method (NormA) from PyDiff's (NormB). Our method remains superior under the same setting as PyDiff.
- Note that the provided model is trained on the LOLv1 training set, but it generalizes well to other datasets. We will perform more training and tests on other datasets in the future.

All results listed in our paper, including those of the compared methods, are available on Baidu Netdisk or Google Drive.
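To make the two metric settings concrete, here is a minimal sketch (not this repo's `evaluation.py`) assuming scikit-image >= 0.19 and the `lpips` package; scaling inputs to [-1, 1] is what `lpips` documents, and whether that corresponds exactly to NormA is an assumption:

```python
# Illustrative only: SSIM over RGB channels and LPIPS on [-1, 1]-normalized tensors.
import numpy as np
import torch
import lpips
from skimage.metrics import structural_similarity


def ssim_rgb(pred: np.ndarray, gt: np.ndarray) -> float:
    """SSIM computed over all three RGB channels, not on a grayscale conversion."""
    # pred/gt: uint8 HxWx3 arrays of identical shape.
    return structural_similarity(pred, gt, channel_axis=2, data_range=255)


def lpips_alex(pred: np.ndarray, gt: np.ndarray) -> float:
    """LPIPS (AlexNet backbone) with inputs scaled to [-1, 1]."""
    loss_fn = lpips.LPIPS(net='alex')

    def to_tensor(img: np.ndarray) -> torch.Tensor:
        # HWC uint8 -> NCHW float in [-1, 1]
        return torch.from_numpy(img).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    with torch.no_grad():
        return loss_fn(to_tensor(pred), to_tensor(gt)).item()
```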
Put the training dataset in `datasets/scratch/LLIE/data/lowlight/train`.
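Assuming the training set mirrors the test layout with paired input/gt subfolders (an assumption, pending the detailed instructions below):

```
datasets/scratch/LLIE/data/lowlight/train/
├── input/   # low-light training images
└── gt/      # corresponding normal-light targets
```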
# in {path_to_this_repo}/,
$ python train_diffusion.py
Detailed training instructions will be updated soon.
If you find this paper useful, please consider starring this repo and citing our paper:
@inproceedings{shang2024multi,
  title={Multi-Domain Multi-Scale Diffusion Model for Low-Light Image Enhancement},
  author={Shang, Kai and Shao, Mingwen and Wang, Chao and Cheng, Yuanshuo and Wang, Shuigen},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  volume={38},
  number={5},
  pages={4722--4730},
  year={2024}
}