The objective of infrared and visible image fusion is to integrate the prominent targets of the infrared image and the background information of the visible image into a single image. Many deep learning-based approaches have been applied to image fusion. However, most methods fail to sufficiently extract the distinct features of images from different modalities, producing fusion results that are biased toward one modality while losing information from the other. To address this, we propose a novel infrared and visible image fusion method based on generative adversarial networks. We design two sets of generative adversarial networks: the first performs preliminary feature extraction, generating an intermediate result whose features are discriminated against the infrared image; the second performs deep feature extraction, generating the fused image whose features are discriminated against the visible image. Through the adversarial training of these two generator-discriminator pairs, diverse features from both modalities are comprehensively extracted. Extensive qualitative and quantitative experiments show that our approach retains more information from the source images and achieves superior quality compared with seven other prominent methods.