JOURNAL ARTICLE

Dual-branch visible and infrared image fusion transformer

Abstract

Image fusion is the process of combining features from two images of different sources to generate a new image. Deep learning has been widely used to adapt to different application scenarios; however, existing fusion networks focus on extracting local information and neglect long-range dependencies. To address this shortcoming, we propose a fusion network based on the Transformer, with some modifications made to accommodate our experimental equipment. A dual-branch autoencoder network is designed with detail and semantic branches; the fusion layer consists of a CNN and a Transformer, and the decoder reconstructs the features to obtain the fused image. A new loss function is proposed to train the network, and, based on the results, an infrared feature compensation network is designed to enhance the fusion effect. We compared our method with several other algorithms on the metrics of interest. In experiments on several datasets, our method improves on the SCD, SSIM, and MS-SSIM metrics, and is essentially equal to other algorithms on saliency-based structural similarity, weighted quality assessment, and edge-based structural similarity. The experimental results show that our method is feasible.
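The abstract describes a fusion layer that pairs a local (CNN-style) path with a Transformer path to capture long-range dependencies between the visible and infrared features. The following is a minimal NumPy sketch of that idea, not the paper's implementation: all shapes, function names, and the simple averaging decoder are illustrative assumptions, with cross-attention standing in for the Transformer branch.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (tokens, dim). The core Transformer operation that lets
    # every patch token attend to every other, capturing long-range
    # dependencies that a purely convolutional layer misses.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

def fuse_tokens(vis_tokens, ir_tokens):
    # Toy fusion layer: visible tokens cross-attend to infrared tokens
    # (Transformer path), averaged with an identity pass-through that
    # stands in for the local CNN path.
    attended = scaled_dot_product_attention(vis_tokens, ir_tokens, ir_tokens)
    return 0.5 * (vis_tokens + attended)

# Hypothetical feature maps: 16 patch tokens with 32-dim features each.
rng = np.random.default_rng(0)
vis = rng.standard_normal((16, 32))
ir = rng.standard_normal((16, 32))
fused = fuse_tokens(vis, ir)
print(fused.shape)  # (16, 32)
```

In the paper's full pipeline, such fused features would then be passed to the decoder branch to reconstruct the fused image; here the sketch stops at the feature level.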

Keywords:
Computer science, Artificial intelligence, Autoencoder, Transformer, Fusion, Image fusion, Pattern recognition, Feature extraction, Encoder, Deep learning, Computer vision, Engineering

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.22
References: 18
Citation Normalized Percentile: 0.50


Topics

Advanced Image Fusion Techniques
Physical Sciences →  Engineering →  Media Technology
Remote-Sensing Image Classification
Physical Sciences →  Engineering →  Media Technology
Visual Attention and Saliency Detection
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition