By Yuyan Chen.
| Dataset | Method | MSE | PSNR | SSIM | CIEDE2000 |
|---|---|---|---|---|---|
| O-Haze | baseline | 0.0038 | 24.304 | 0.7192 | 4.7643 |
| O-Haze | improved | 0.0037 | 24.436 | 0.7242 | 4.7547 |
| HazeRD | baseline | 0.0679 | 14.481 | 0.8314 | 16.161 |
| HazeRD | improved | 0.0695 | 14.584 | 0.8315 | 16.036 |
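As a rough sanity check on the numbers above, PSNR can be derived from MSE for images normalized to [0, 1]. (The reported PSNR is typically averaged per image rather than computed from the mean MSE, so the two will not match exactly.)

```python
import math

def psnr_from_mse(mse: float, max_val: float = 1.0) -> float:
    """PSNR in dB for images with pixel values in [0, max_val]."""
    return 10.0 * math.log10(max_val ** 2 / mse)

print(round(psnr_from_mse(0.0038), 2))  # -> 24.2, close to the O-Haze baseline's 24.304
```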
The dehazing results can be found at Baidu Wangpan.
Make sure you have Python >= 3.7 installed on your machine.

Environment setup:

- Create a conda environment: `conda create -n dm2f`, then `conda activate dm2f`.
- Install dependencies (tested with PyTorch 1.8.0):
  - Install `pytorch==1.8.0` and `torchvision==0.9.0` (via conda, recommended).
  - Install other dependencies: `pip install -r requirements.txt`.
- Prepare the datasets:
  - Download the RESIDE dataset from the official webpage.
  - Download the O-Haze dataset from the official webpage.
  - Make a directory `./data` and create a symbolic link for the uncompressed data, e.g., `./data/RESIDE`.
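The directory-and-symlink step can be scripted, for example as below (the source path is a placeholder; point it at wherever you uncompressed the dataset):

```python
import os
from pathlib import Path

data_dir = Path("./data")
data_dir.mkdir(exist_ok=True)  # make ./data if it does not exist yet

# Placeholder path: replace with your actual uncompressed dataset location.
src = Path("/path/to/uncompressed/RESIDE")
link = data_dir / "RESIDE"
if not link.is_symlink():
    # Create ./data/RESIDE pointing at the uncompressed data.
    os.symlink(src, link, target_is_directory=True)
```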
Training:

- Set the path of the datasets in `tools/config.py`.
- Run `python train.py`.
The model uses the pretrained ResNeXt (`resnext101_32x8d`) from torchvision.
Training a model on a single RTX 3080 Ti (12 GB) GPU takes about 6 hours.
Testing:

- Set the paths of the five benchmark datasets in `tools/config.py`.
- Put the trained model in `./ckpt/`.
- Run `python test.py` (for O-Haze/HazeRD/RESIDE) or `python output.py` (for your own pictures).

The testing settings are defined at the top of `test.py`, and you can conveniently change them as needed.
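The settings block at the top of `test.py` is typically a handful of module-level constants along the lines of the sketch below. All of these names are hypothetical illustrations; check `test.py` itself for the actual variables.

```python
# Hypothetical testing settings, mirroring the kind of knobs described above.
# The real names and values in test.py may differ.
ckpt_path = './ckpt'      # where the trained model was placed
exp_name = 'O-Haze'       # which benchmark to evaluate: O-Haze / HazeRD / RESIDE
save_results = True       # whether to write dehazed images to disk
```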
DM2F-Net is released under the MIT license.
If you find the paper or the code helpful to your research, please cite the project:
@inproceedings{deng2019deep,
title={Deep multi-model fusion for single-image dehazing},
author={Deng, Zijun and Zhu, Lei and Hu, Xiaowei and Fu, Chi-Wing and Xu, Xuemiao and Zhang, Qing and Qin, Jing and Heng, Pheng-Ann},
booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
pages={2453--2462},
year={2019}
}