Yong Chen, Jiaojiao Zhang, Wang Zhen
Abstract: To address the loss of detail and insufficient feature extraction in the fusion results of infrared and visible light images, a deep learning network model for infrared and visible light image fusion with multi-scale densely connected attention is proposed. First, multi-scale convolution is designed to extract information at different scales from infrared and visible light images, enlarging the feature extraction range of the receptive field and overcoming the insufficient feature extraction of a single scale. Then, feature extraction is enhanced through a densely connected network, and an attention mechanism is introduced at the end of the encoding sub-network to closely connect global context information and strengthen the ability to focus on important feature information in infrared and visible light images. Finally, the fully convolutional layers that compose the decoding network are used to reconstruct the fused image. This study selects six objective evaluation indicators of image fusion, and the fusion experiments conducted on
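The multi-scale convolution step described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the kernel sizes (3 and 5) and the mean-filter kernels are assumptions made here to show how applying convolutions with different kernel sizes yields features at different receptive-field sizes.

```python
def conv2d_valid(img, k):
    """Plain 2D 'valid' convolution over a list-of-lists image."""
    H, W = len(img), len(img[0])
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(H - kh + 1):
        row = []
        for j in range(W - kw + 1):
            s = 0.0
            for a in range(kh):
                for b in range(kw):
                    s += img[i + a][j + b] * k[a][b]
            row.append(s)
        out.append(row)
    return out

def multi_scale_features(img, kernel_sizes=(3, 5)):
    """Extract feature maps at several receptive-field sizes, mimicking the
    multi-scale convolution branch of the encoder. Mean filters stand in for
    learned kernels, which the abstract does not specify."""
    feats = []
    for ks in kernel_sizes:
        k = [[1.0 / (ks * ks)] * ks for _ in range(ks)]
        feats.append(conv2d_valid(img, k))
    return feats

# Toy 8x8 "image": a larger kernel sees a wider context per output pixel,
# so its valid-convolution output map is smaller.
img = [[float((i + j) % 7) for j in range(8)] for i in range(8)]
f3, f5 = multi_scale_features(img)
print(len(f3), len(f3[0]))  # 6 6  (8 - 3 + 1)
print(len(f5), len(f5[0]))  # 4 4  (8 - 5 + 1)
```

In the network itself such per-scale maps would be concatenated and passed into the densely connected layers; padding would normally be used so the scales align spatially, which this valid-convolution sketch omits for brevity.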