This is the official PyTorch implementation of "[Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity](https://www.sciencedirect.com/science/article/pii/S1566253523001860)".
```bibtex
@article{TANG2023PSFusion,
  title={Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity},
  author={Tang, Linfeng and Zhang, Hao and Xu, Han and Ma, Jiayi},
  journal={Information Fusion},
  volume={99},
  pages={101870},
  year={2023},
}
```
The overall framework of the proposed PSFusion.
- torch 1.10.0
- cudatoolkit 11.3.1
- torchvision 0.11.0
- kornia 0.6.5
- pillow 8.3.2
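A possible way to set up an environment with the versions listed above (a sketch; the exact conda/pip channels and an environment name `psfusion` are assumptions, not taken from the repo):

```shell
# Hypothetical environment setup matching the listed versions
conda create -n psfusion python=3.8
conda activate psfusion
conda install pytorch==1.10.0 torchvision==0.11.0 cudatoolkit=11.3 -c pytorch
pip install kornia==0.6.5 pillow==8.3.2
```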
- Download the pre-trained checkpoint from best_model.pth and put it in `./results/PSFusion/checkpoints`.
- Download the MSRS dataset from MSRS and put it in `./datasets`.
```shell
python test_Fusion.py --dataroot=./datasets --dataset_name=MSRS --resume=./results/PSFusion/checkpoints/best_model.pth
```
If you need to test other datasets, arrange the data as expected by the dataloader and specify `--dataroot` and `--dataset_name` accordingly.
Before training PSFusion, you need to download the pre-processed MSRS dataset MSRS and put it in `./datasets`.
Then run:
```shell
python train.py --dataroot=./datasets/MSRS --name=PSFusion
```
Comparison of fusion and segmentation results between SeAFusion and our method under harsh conditions.
Comparison of the computational complexity between feature-level fusion and image-level fusion for the semantic segmentation task.
The architecture of the superficial detail fusion module (SDFM) based on the channel-spatial attention mechanism.
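The channel-spatial attention fusion named in the caption above can be sketched as follows. This is a minimal illustrative module, not the released SDFM code; the layer layout and the `reduction` parameter are assumptions:

```python
import torch
import torch.nn as nn

class ChannelSpatialFusion(nn.Module):
    """Sketch of a channel-spatial attention fusion block (SDFM-style).

    Hypothetical simplification: concatenate infrared and visible features,
    re-weight channels via a squeeze-and-excitation branch, then re-weight
    spatial positions via a convolution over pooled channel maps.
    """

    def __init__(self, channels, reduction=4):
        super().__init__()
        # Channel attention: global pooling, bottleneck MLP, sigmoid gating
        self.channel_mlp = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * channels, (2 * channels) // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d((2 * channels) // reduction, 2 * channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: gate each position from avg- and max-pooled maps
        self.spatial_conv = nn.Sequential(
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_ir, feat_vis):
        x = torch.cat([feat_ir, feat_vis], dim=1)
        x = x * self.channel_mlp(x)                 # channel re-weighting
        avg_map = x.mean(dim=1, keepdim=True)
        max_map = x.amax(dim=1, keepdim=True)
        gate = self.spatial_conv(torch.cat([avg_map, max_map], dim=1))
        x = x * gate                                # spatial re-weighting
        return self.fuse(x)                         # back to `channels` maps
```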
The architecture of the profound semantic fusion module (PSFM) based on the cross-attention mechanism.
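The cross-attention mechanism named in the caption above can likewise be sketched with standard PyTorch attention. This is an illustrative simplification, not the released PSFM code; letting each modality attend to the other is the assumed design:

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Sketch of a cross-attention fusion block (PSFM-style).

    Hypothetical simplification: flatten each feature map into tokens, let
    infrared tokens query visible tokens and vice versa, then concatenate
    and project back to the original channel count.
    """

    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn_ir = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.attn_vis = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.fuse = nn.Conv2d(2 * channels, channels, 1)

    def forward(self, feat_ir, feat_vis):
        b, c, h, w = feat_ir.shape
        ir = feat_ir.flatten(2).transpose(1, 2)     # (B, H*W, C)
        vis = feat_vis.flatten(2).transpose(1, 2)   # (B, H*W, C)
        # Each modality queries the other's features
        ir_enh, _ = self.attn_ir(ir, vis, vis)
        vis_enh, _ = self.attn_vis(vis, ir, ir)
        fused = torch.cat([ir_enh, vis_enh], dim=2)            # (B, H*W, 2C)
        fused = fused.transpose(1, 2).reshape(b, 2 * c, h, w)  # back to maps
        return self.fuse(fused)
```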
Qualitative comparison of PSFusion with 9 state-of-the-art methods on the **MSRS** dataset.
Qualitative comparison of PSFusion with 9 state-of-the-art methods on the **M3FD** dataset.
Quantitative comparisons of the six metrics on 361 image pairs from the MSRS dataset. A point (x, y) on a curve denotes that (100x)% of the image pairs have metric values no greater than y.
Quantitative comparisons of the six metrics on 300 image pairs from the M3FD dataset.
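The cumulative curves in the two quantitative comparisons above can be computed directly from their definition: sort the per-pair metric values and pair each value with the fraction of pairs at or below it. A minimal NumPy sketch (the function name is hypothetical):

```python
import numpy as np

def cumulative_curve(metric_values):
    """Return (x, y) points for a cumulative metric curve.

    A point (x, y) means a fraction x of image pairs have metric
    values no greater than y.
    """
    y = np.sort(np.asarray(metric_values, dtype=float))
    x = np.arange(1, len(y) + 1) / len(y)
    return x, y
```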
Segmentation results of various fusion algorithms on the MSRS dataset.
Per-class segmentation results on the MSRS dataset.
Segmentation results of feature-level fusion-based multi-modal segmentation algorithms and our image-level fusion-based solution on the MFNet dataset.
Per-class segmentation results of image-level fusion and feature-level fusion on the MFNet dataset.