[2024-07-16] Our paper "DRMF: Degradation-Robust Multi-Modal Image Fusion via Composable Diffusion Prior" has been accepted by ACM MM 2024! [Paper] [Code]
[2023-06-05] Our paper "Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity" has been accepted by Information Fusion! [Paper] [Code]
[2022-07-29] Our survey paper "Deep learning-based image fusion: A survey" has been accepted by the Journal of Image and Graphics! [Paper]
- Citation
- Multi-Modal Image Fusion
- Digital Photography Image Fusion
- Remote Sensing Image Fusion
- General Image Fusion Framework
- Survey
- Dataset
- Evaluation Metric
If our summary is helpful to you, please cite the following papers:
@article{Tang2022Survey,
title={Deep learning-based image fusion: A survey},
author={Tang, Linfeng and Zhang, Hao and Xu, Han and Ma, Jiayi},
journal={Journal of Image and Graphics},
volume={28},
number={1},
pages={3--36},
year={2023}
}
@inproceedings{Tang2024DRMF,
title={DRMF: Degradation-Robust Multi-Modal Image Fusion via Composable Diffusion Prior},
author={Tang, Linfeng and Deng, Yuxin and Yi, Xunpeng and Yan, Qinglong and Yuan, Yixuan and Ma, Jiayi},
booktitle={Proceedings of the ACM International Conference on Multimedia},
year={2024}
}
@article{Tang2024CAMF,
title={CAMF: An Interpretable Infrared and Visible Image Fusion Network Based on Class Activation Mapping},
author={Tang, Linfeng and Chen, Ziang and Huang, Jun and Ma, Jiayi},
journal={IEEE Transactions on Multimedia},
year={2024},
volume={26},
pages={4776--4791},
publisher={IEEE}
}
@article{Tang2022SuperFusion,
title={SuperFusion: A versatile image registration and fusion network with semantic awareness},
author={Tang, Linfeng and Deng, Yuxin and Ma, Yong and Huang, Jun and Ma, Jiayi},
journal={IEEE/CAA Journal of Automatica Sinica},
volume={9},
number={12},
pages={2121--2137},
year={2022},
publisher={IEEE}
}
@article{Ma2022SwinFusion,
title={SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer},
author={Ma, Jiayi and Tang, Linfeng and Fan, Fan and Huang, Jun and Mei, Xiaoguang and Ma, Yong},
journal={IEEE/CAA Journal of Automatica Sinica},
volume={9},
number={7},
pages={1200--1217},
year={2022},
publisher={IEEE}
}
@article{Tang2022SeAFusion,
title = {Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network},
author = {Linfeng Tang and Jiteng Yuan and Jiayi Ma},
journal = {Information Fusion},
volume = {82},
pages = {28--42},
year = {2022},
issn = {1566-2535},
publisher={Elsevier}
}
@article{Tang2022DIVFusion,
title={DIVFusion: Darkness-free infrared and visible image fusion},
author={Tang, Linfeng and Xiang, Xinyu and Zhang, Hao and Gong, Meiqi and Ma, Jiayi},
journal={Information Fusion},
volume = {91},
pages = {477--493},
year = {2023},
publisher={Elsevier}
}
@article{Tang2022PIAFusion,
title={PIAFusion: A progressive infrared and visible image fusion network based on illumination aware},
author={Tang, Linfeng and Yuan, Jiteng and Zhang, Hao and Jiang, Xingyu and Ma, Jiayi},
journal={Information Fusion},
volume = {83-84},
pages = {79--92},
year = {2022},
issn = {1566-2535},
publisher={Elsevier}
}
@article{Ma2021STDFusionNet,
title={STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection},
author={Ma, Jiayi and Tang, Linfeng and Xu, Meilong and Zhang, Hao and Xiao, Guobao},
journal={IEEE Transactions on Instrumentation and Measurement},
year={2021},
volume={70},
pages={1--13},
doi={10.1109/TIM.2021.3075747},
publisher={IEEE}
}
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
DenseFuse | DenseFuse: A Fusion Approach to Infrared and Visible Images | Paper | Code | TIP | AE | Self-supervised | 2019 |
FusionGAN | FusionGAN: A generative adversarial network for infrared and visible image fusion | Paper | Code | InfFus | GAN | Unsupervised | 2019 |
DDcGAN | Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators | Paper | Code | IJCAI | GAN | Unsupervised | 2019 |
NestFuse | NestFuse: An Infrared and Visible Image Fusion Architecture Based on Nest Connection and Spatial/Channel Attention Models | Paper | Code | TIM | AE | Self-supervised | 2020 |
DDcGAN | DDcGAN: A dual-discriminator conditional generative adversarial network for multi-resolution image fusion | Paper | Code | TIP | GAN | Unsupervised | 2020 |
DIDFuse | DIDFuse: Deep Image Decomposition for Infrared and Visible Image Fusion | Paper | Code | IJCAI | AE | Self-supervised | 2020 |
RFN-Nest | RFN-Nest: An end-to-end residual fusion network for infrared and visible images | Paper | Code | InfFus | AE | Self-supervised | 2021 |
CSF | Classification Saliency-Based Rule for Visible and Infrared Image Fusion | Paper | Code | TCI | AE | Self-supervised | 2021 |
DRF | DRF: Disentangled Representation for Visible and Infrared Image Fusion | Paper | Code | TIM | AE | Self-supervised | 2021 |
SEDRFuse | SEDRFuse: A Symmetric Encoder–Decoder With Residual Block Network for Infrared and Visible Image Fusion | Paper | Code | TIM | AE | Self-supervised | 2021 |
MFEIF | Learning a Deep Multi-Scale Feature Ensemble and an Edge-Attention Guidance for Image Fusion | Paper | | TCSVT | AE | Self-supervised | 2021 |
Meta-Learning | Different Input Resolutions and Arbitrary Output Resolution: A Meta Learning-Based Deep Framework for Infrared and Visible Image Fusion | Paper | | TIP | CNN | Unsupervised | 2021 |
RXDNFuse | RXDNFuse: A aggregated residual dense network for infrared and visible image fusion | Paper | Code | InfFus | CNN | Unsupervised | 2021 |
STDFusionNet | STDFusionNet: An Infrared and Visible Image Fusion Network Based on Salient Target Detection | Paper | Code | TIM | CNN | Unsupervised | 2021 |
D2LE | A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion | Paper | | TIP | CNN | Unsupervised | 2021 |
HAF | Searching a Hierarchically Aggregated Fusion Architecture for Fast Multi-Modality Image Fusion | Paper | Code | ACM MM | CNN | Unsupervised | 2021 |
SDDGAN | Semantic-supervised Infrared and Visible Image Fusion via a Dual-discriminator Generative Adversarial Network | Paper | Code | TMM | GAN | Unsupervised | 2021 |
Detail-GAN | Infrared and visible image fusion via detail preserving adversarial learning | Paper | Code | InfFus | GAN | Unsupervised | 2021 |
Perception-GAN | Image fusion based on generative adversarial network consistent with perception | Paper | Code | InfFus | GAN | Unsupervised | 2021 |
GAN-FM | GAN-FM: Infrared and Visible Image Fusion Using GAN With Full-Scale Skip Connection and Dual Markovian Discriminators | Paper | Code | TCI | GAN | Unsupervised | 2021 |
AttentionFGAN | AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks | Paper | | TMM | GAN | Unsupervised | 2021 |
GANMcC | GANMcC: A Generative Adversarial Network With Multiclassification Constraints for Infrared and Visible Image Fusion | Paper | Code | TIM | GAN | Unsupervised | 2021 |
MgAN-Fuse | Multigrained Attention Network for Infrared and Visible Image Fusion | Paper | | TIM | GAN | Unsupervised | 2021 |
TC-GAN | Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network | Paper | | TCSVT | GAN | Unsupervised | 2021 |
AUIF | Efficient and model-based infrared and visible image fusion via algorithm unrolling | Paper | Code | TCSVT | AE | Self-supervised | 2021 |
TarDAL | Target-aware Dual Adversarial Learning and a Multi-scenario Multi-Modality Benchmark to Fuse Infrared and Visible for Object Detection | Paper | Code | CVPR | GAN | Unsupervised | 2022 |
RFNet | RFNet: Unsupervised Network for Mutually Reinforcing Multi-modal Image Registration and Fusion | Paper | Code | CVPR | CNN | Unsupervised | 2022 |
SeAFusion | Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network | Paper | Code | InfFus | CNN | Unsupervised | 2022 |
PIAFusion | PIAFusion: A progressive infrared and visible image fusion network based on illumination aware | Paper | Code | InfFus | CNN | Unsupervised | 2022 |
UMF-CMGR | Unsupervised Misaligned Infrared and Visible Image Fusion via Cross-Modality Image Generation and Registration | Paper | Code | IJCAI | CNN | Unsupervised | 2022 |
DetFusion | DetFusion: A Detection-driven Infrared and Visible Image Fusion Network | Paper | Code | ACM MM | CNN | Unsupervised | 2022 |
DIVFusion | DIVFusion: Darkness-free infrared and visible image fusion | Paper | Code | InfFus | CNN | Unsupervised | 2023 |
PSFusion | Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity | Paper | Code | InfFus | CNN | Unsupervised | 2023 |
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
CNN | A medical image fusion method based on convolutional neural networks | Paper | | ICIF | CNN | Unsupervised | 2017 |
Zero-LMF | Zero-Learning Fast Medical Image Fusion | Paper | Code | ICIF | CNN | Unsupervised | 2019 |
DDcGAN | Learning a Generative Model for Fusing Infrared and Visible Images via Conditional Generative Adversarial Network with Dual Discriminators | Paper | Code | IJCAI | GAN | Unsupervised | 2019 |
GFPPC-GAN | Green Fluorescent Protein and Phase-Contrast Image Fusion via Generative Adversarial Networks | Paper | | CMMM | GAN | Unsupervised | 2019 |
CCN-CP | Multi-modality medical image fusion using convolutional neural network and contrast pyramid | Paper | | Sensors | CNN | Unsupervised | 2020 |
DDcGAN | DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion | Paper | Code | TIP | GAN | Unsupervised | 2020 |
MGMDcGAN | Medical Image Fusion Using Multi-Generator Multi-Discriminator Conditional Generative Adversarial Network | Paper | Code | Access | GAN | Unsupervised | 2020 |
D2LE | A Bilevel Integrated Model With Data-Driven Layer Ensemble for Multi-Modality Image Fusion | Paper | | TIP | CNN | Unsupervised | 2021 |
HAF | Searching a Hierarchically Aggregated Fusion Architecture for Fast Multi-Modality Image Fusion | Paper | Code | ACM MM | CNN | Unsupervised | 2021 |
EMFusion | EMFusion: An unsupervised enhanced medical image fusion network | Paper | Code | InfFus | CNN | Unsupervised | 2021 |
DPCN-Fusion | Green Fluorescent Protein and Phase Contrast Image Fusion Via Detail Preserving Cross Network | Paper | Code | TCI | CNN | Unsupervised | 2021 |
MSPRN | A multiscale residual pyramid attention network for medical image fusion | Paper | Code | BSPC | CNN | Unsupervised | 2021 |
DCGAN | Medical image fusion method based on dense block and deep convolutional generative adversarial network | Paper | | NCA | GAN | Unsupervised | 2021 |
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
DeepFuse | DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs | Paper | Code | ICCV | CNN | Unsupervised | 2017 |
CNN | Multi-exposure fusion with CNN features | Paper | Code | ICIP | CNN | Unsupervised | 2018 |
MEF-Net | Deep guided learning for fast multi-exposure image fusion | Paper | Code | TIP | CNN | Unsupervised | 2020 |
ICEN | Multi-exposure high dynamic range imaging with informative content enhanced network | Paper | | NC | CNN | Unsupervised | 2020 |
MEF-GAN | MEF-GAN: Multi-Exposure Image Fusion via Generative Adversarial Networks | Paper | Code | TIP | GAN | Unsupervised | 2020 |
CF-Net | Deep coupled feedback network for joint exposure fusion and image super-resolution | Paper | Code | TIP | CNN | Unsupervised | 2021 |
UMEF | Deep unsupervised learning based on color un-referenced loss functions for multi-exposure image fusion | Paper | Code | InfFus | CNN | Unsupervised | 2021 |
PA-AGN | Two exposure fusion using prior-aware generative adversarial network | Paper | | TMM | GAN | Unsupervised | 2021 |
AGAL | Attention-guided Global-local Adversarial Learning for Detail-preserving Multi-exposure Image Fusion | Paper | Code | TCSVT | GAN | Unsupervised | 2022 |
GANFuse | GANFuse: a novel multi-exposure image fusion method based on generative adversarial networks | Paper | | NCAA | GAN | Unsupervised | 2021 |
DRLF | Automatic Intermediate Generation With Deep Reinforcement Learning for Robust Two-Exposure Image Fusion | Paper | | TNNLS | CNN | Unsupervised | 2021 |
TransMEF | TransMEF: A Transformer-Based Multi-Exposure Image Fusion Framework using Self-Supervised Multi-Task Learning | Paper | Code | AAAI | AE | Self-supervised | 2022 |
DPE-MEF | Multi-exposure image fusion via deep perceptual enhancement | Paper | Code | InfFus | CNN | Unsupervised | 2022 |
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
CNN | Multi-focus image fusion with a deep convolutional neural network | Paper | Code | InfFus | CNN | Supervised | 2017 |
ECNN | Ensemble of CNN for multi-focus image fusion | Paper | Code | InfFus | CNN | Supervised | 2019 |
MLFCNN | Multilevel features convolutional neural network for multifocus image fusion | Paper | | TCI | CNN | Supervised | 2019 |
DRPL | DRPL: Deep Regression Pair Learning for Multi-Focus Image Fusion | Paper | Code | TIP | CNN | Supervised | 2020 |
MMF-Net | An α-Matte Boundary Defocus Model-Based Cascaded Network for Multi-Focus Image Fusion | Paper | Code | TCI | CNN | Supervised | 2020 |
MFF-SSIM | Towards Reducing Severe Defocus Spread Effects for Multi-Focus Image Fusion via an Optimization Based Strategy | Paper | Code | Sensors | CNN | Unsupervised | 2020 |
MFNet | Structural Similarity Loss for Learning to Fuse Multi-Focus Images | Paper | | TIP | CNN | Supervised | 2021 |
GEU-Net | Global-Feature Encoding U-Net (GEU-Net) for Multi-Focus Image Fusion | Paper | Code | TCI | CNN | Self-supervised | 2021 |
DTMNet | DTMNet: A Discrete Tchebichef Moments-Based Deep Neural Network for Multi-Focus Image Fusion | Paper | | TMM | CNN | Unsupervised | 2021 |
SMFuse | SMFuse: Multi-Focus Image Fusion Via Self-Supervised Mask-Optimization | Paper | Code | NCA | CNN | Unsupervised | 2021 |
ACGAN | A generative adversarial network with adaptive constraints for multi-focus image fusion | Paper | Code | ICCV | GAN | Supervised | 2021 |
FuseGAN | Learning to fuse multi-focus image via conditional generative adversarial network | Paper | | TIP | GAN | Supervised | 2020 |
D2FMIF | Depth-Distilled Multi-focus Image Fusion | Paper | | TMM | CNN | Supervised | 2019 |
SESF-Fuse | SESF-Fuse: an unsupervised deep model for multi-focus image fusion | Paper | Code | NCAA | CNN | Unsupervised | 2020 |
MFF-GAN | MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion | Paper | Code | InfFus | GAN | Unsupervised | 2021 |
MFIF-GAN | MFIF-GAN: A new generative adversarial network for multi-focus image fusion | Paper | Code | SPIC | GAN | Supervised | 2021 |
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
PNN | Pansharpening by Convolutional Neural Networks | Paper | Code | RS | CNN | Supervised | 2016 |
PanNet | PanNet: A deep network architecture for pan-sharpening | Paper | Code | ICCV | CNN | Supervised | 2017 |
TFNet | Remote sensing image fusion based on two-stream fusion network | Paper | Code | InfFus | CNN | Supervised | 2020 |
BKL | Unsupervised Blur Kernel Learning for Pansharpening | Paper | | IGARSS | CNN | Unsupervised | 2020 |
Pan-GAN | Pan-GAN: An unsupervised pan-sharpening method for remote sensing image fusion | Paper | Code | InfFus | GAN | Unsupervised | 2020 |
UCNN | Pansharpening via Unsupervised Convolutional Neural Networks | Paper | | JSTARS | CNN | Unsupervised | 2020 |
UPSNet | UPSNet: Unsupervised Pan-Sharpening Network With Registration Learning Between Panchromatic and Multi-Spectral Images | Paper | | Access | CNN | Unsupervised | 2020 |
GPPNN | Deep Gradient Projection Networks for Pan-sharpening | Paper | Code | CVPR | CNN | Supervised | 2021 |
GTP-PNet | GTP-PNet: A residual learning network based on gradient transformation prior for pansharpening | Paper | Code | ISPRS | CNN | Supervised | 2021 |
HMCNN | Pan-Sharpening Via High-Pass Modification Convolutional Neural Network | Paper | Code | ICIP | CNN | Supervised | 2021 |
SDPNet | SDPNet: A Deep Network for Pan-Sharpening With Enhanced Information Representation | Paper | Code | TGRS | CNN | Supervised | 2021 |
SIPSA-Net | SIPSA-Net: Shift-Invariant Pan Sharpening with Moving Object Alignment for Satellite Imagery | Paper | Code | CVPR | CNN | Supervised | 2021 |
SRPPNN | Super-resolution-guided progressive pansharpening based on a deep convolutional neural network | Paper | Code | TGRS | CNN | Supervised | 2021 |
PSGAN | PSGAN: A generative adversarial network for remote sensing image pan-sharpening | Paper | Code | TGRS | GAN | Supervised | 2021 |
MDCNN | MDCNN: multispectral pansharpening based on a multiscale dilated convolutional neural network | Paper | | JRS | CNN | Supervised | 2021 |
LDP-Net | LDP-Net: An Unsupervised Pansharpening Network Based on Learnable Degradation Processes | Paper | Code | arXiv | CNN | Unsupervised | 2021 |
DIGAN | Pansharpening approach via two-stream detail injection based on relativistic generative adversarial networks | Paper | | ESA | GAN | Supervised | 2022 |
DPFN | A Dual-Path Fusion Network for Pan-Sharpening | Paper | Code | TGRS | CNN | Supervised | 2022 |
MSGAN | An Unsupervised Multi-scale Generative Adversarial Network for Remote Sensing Image Pan-Sharpening | Paper | | ICMM | GAN | Unsupervised | 2022 |
UCGAN | Unsupervised Cycle-Consistent Generative Adversarial Networks for Pan Sharpening | Paper | Code | TGRS | GAN | Unsupervised | 2022 |
D2TNet | A ConvLSTM Network with Dual-direction Transfer for Pan-sharpening | Paper | Code | TGRS | CNN | Supervised | 2022 |
P2Sharpen | P2Sharpen: A progressive pansharpening network with deep spectral transformation | Paper | Code | InfFus | CNN | Supervised | 2023 |
Method | Title | Paper | Code | Venue | Framework | Supervision | Year |
---|---|---|---|---|---|---|---|
IFCNN | IFCNN: A general image fusion framework based on convolutional neural network | Paper | Code | InfFus | CNN | Supervised | 2020 |
FusionDN | FusionDN: A Unified Densely Connected Network for Image Fusion | Paper | Code | AAAI | CNN | Unsupervised | 2020 |
PMGI | Rethinking the Image Fusion: A Fast Unified Image Fusion Network based on Proportional Maintenance of Gradient and Intensity | Paper | Code | AAAI | CNN | Unsupervised | 2020 |
CU-Net | Deep Convolutional Neural Network for Multi-Modal Image Restoration and Fusion | Paper | Code | TPAMI | CNN | Supervised | 2021 |
SDNet | SDNet: A Versatile Squeeze-and-Decomposition Network for Real-Time Image Fusion | Paper | Code | IJCV | CNN | Unsupervised | 2021 |
DIF-Net | Unsupervised Deep Image Fusion With Structure Tensor Representations | Paper | Code | TIP | CNN | Unsupervised | 2021 |
IFSepR | IFSepR: A general framework for image fusion based on separate representation learning | Paper | | TMM | AE | Self-supervised | 2021 |
MTOE | Multiple Task-Oriented Encoders for Unified Image Fusion | Paper | | ICME | CNN | Unsupervised | 2021 |
U2Fusion | U2Fusion: A Unified Unsupervised Image Fusion Network | Paper | Code | TPAMI | CNN | Unsupervised | 2022 |
SwinFusion | SwinFusion: Cross-domain Long-range Learning for General Image Fusion via Swin Transformer | Paper | Code | JAS | Transformer | Unsupervised | 2022 |
DeFusion | Fusion from Decomposition: A Self-Supervised Decomposition Approach for Image Fusion | Paper | Code | ECCV | CNN | Unsupervised | 2022 |
UIFGAN | UIFGAN: An unsupervised continual-learning generative adversarial network for unified image fusion | Paper | Code | InfFus | GAN | Unsupervised | 2023 |
Title | Paper | Code | Venue | Year |
---|---|---|---|---|
A review of remote sensing image fusion methods | Paper | | InfFus | 2016 |
Pixel-level image fusion: A survey of the state of the art | Paper | | InfFus | 2017 |
Deep learning for pixel-level image fusion: Recent advances and future prospects | Paper | | InfFus | 2018 |
Infrared and visible image fusion methods and applications: A survey | Paper | | InfFus | 2019 |
Multi-focus image fusion: A Survey of the state of the art | Paper | | InfFus | 2020 |
Image fusion meets deep learning: A survey and perspective | Paper | | InfFus | 2021 |
Deep Learning-based Multi-focus Image Fusion: A Survey and A Comparative Study | Paper | Code | TPAMI | 2021 |
Benchmarking and comparing multi-exposure image fusion algorithms | Paper | Code | InfFus | 2021 |
Current advances and future perspectives of image fusion: A comprehensive review | Paper | Code | InfFus | 2023 |
General evaluation metrics are available at https://github.com/Linfeng-Tang/Image-Fusion/tree/main/General%20Evaluation%20Metric or https://github.com/Linfeng-Tang/Evaluation-for-Image-Fusion.
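As a quick illustration of what such metrics compute, below is a minimal NumPy sketch of two widely used no-reference fusion metrics, entropy (EN) and spatial frequency (SF). It follows the common textbook definitions and is not the implementation from the toolboxes linked above; the function names and the placeholder image are illustrative only.

```python
# Illustrative sketch of two common no-reference fusion metrics (EN and SF).
# Not the official evaluation toolbox implementation.
import numpy as np

def entropy(img: np.ndarray) -> float:
    """Shannon entropy (EN) of an 8-bit grayscale fused image."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 255))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def spatial_frequency(img: np.ndarray) -> float:
    """Spatial frequency (SF) from row- and column-wise intensity differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

if __name__ == "__main__":
    # Placeholder fused image; in practice, load the fused result as grayscale.
    fused = (np.random.rand(256, 256) * 255).astype(np.uint8)
    print(f"EN = {entropy(fused):.4f}, SF = {spatial_frequency(fused):.4f}")
```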
If our summary is helpful to you, please cite the papers listed in the Citation section above.
If you have any questions, please contact: linfeng0419@gmail.com