A novel image fusion network with an encoder-decoder (autoencoder) architecture is proposed to improve the quality of infrared and visible image fusion and thereby the visual impression of the fused images. The network comprises an encoder module, a fusion layer, a decoder module, and an edge enhancement module. The encoder first extracts shallow features with an improved Inception module, then combines Res2Net and a Transformer to jointly extract local and global deep features from the source images. An edge enhancement module (EEM) is designed to extract salient edge features. A modal maximum difference fusion strategy is introduced to adaptively represent the information in different regions of the source images, thereby enhancing the contrast of the fused image. The features extracted by the encoder and the EEM are combined in the fusion layer, and the decoder reconstructs the fused image from them. The proposed algorithm was evaluated on three datasets. The experimental results demonstrate that the network effectively preserves the background and detail information of both the infrared and visible images, achieving superior results in both subjective and objective evaluations.
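The abstract does not specify the modal maximum difference fusion strategy in detail. The sketch below is a minimal, hypothetical reading of such a rule, assuming it weights each pixel toward the modality with the stronger response in regions where the two modalities disagree most, and falls back to plain averaging where they agree; the function name and weighting scheme are illustrative, not the paper's actual method.

```python
import numpy as np

def max_difference_fusion(ir, vis, eps=1e-8):
    """Fuse two aligned single-channel images or feature maps.

    Hypothetical sketch of a 'modal maximum difference' rule:
    where the modalities differ most, weight the fused output
    toward the modality with the larger response (boosting
    contrast); where they are nearly identical, average them.
    """
    diff = np.abs(ir - vis)             # per-pixel modal difference
    scale = diff / (diff.max() + eps)   # 0..1, large where modalities disagree
    # Shift the weight toward the stronger modality in high-difference regions.
    w_ir = np.where(ir >= vis, 0.5 + 0.5 * scale, 0.5 - 0.5 * scale)
    return w_ir * ir + (1.0 - w_ir) * vis

# Toy example: corners disagree strongly, the rest agrees.
ir  = np.array([[0.9, 0.1], [0.5, 0.5]])
vis = np.array([[0.1, 0.9], [0.5, 0.5]])
fused = max_difference_fusion(ir, vis)
# Disagreeing pixels take the stronger modality (~0.9); agreeing pixels stay 0.5.
```

Under this sketch, strongly disagreeing pixels keep the dominant modality's intensity, which is one plausible way the strategy could raise the contrast of the fused image as the abstract claims.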
Bao Liu, Ruilong He, Shuqi Li, Le Cao