Xufan Miao, Ning Li, Guangkai Sun, Yuchen Bai, Lianqing Zhu, Ji Zhang
During the capture of infrared and visible images, inconsistent illumination adversely affects the visual quality of the fused image. In this paper, we propose a novel method for infrared and visible image fusion, termed SelectiveFusion, which effectively addresses the issue of inconsistent illumination in the capture of infrared and visible light images. The method consists of three key components: an encoder, a fusion strategy, and a decoder. First, the source images are fed into the encoder to extract multi-scale deep features. Subsequently, a new fusion strategy merges the features at each scale. Within this strategy, we develop a selective channel attention fusion module that selectively weights the channels of the input features from the infrared and visible images. Finally, the fused features are reconstructed by a nested decoder. Additionally, we formulate a novel loss function to guide the training of the fusion network. Experiments on publicly available datasets, compared both quantitatively and qualitatively against existing methods, demonstrate the effectiveness and versatility of SelectiveFusion. Our code is publicly available at https://github.com/ISCLab-Bistu/SelectiveFusion.
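The selective channel weighting described above can be illustrated with a minimal sketch. The abstract does not specify the exact formulation, so the following is an assumption-laden toy version: each modality's feature channels are summarized by global average pooling, the two per-channel descriptors compete through a softmax, and the fused channel is the resulting weighted sum. All function names (`global_avg_pool`, `selective_channel_fusion`) are hypothetical, not from the released code.

```python
import math

def global_avg_pool(feat):
    """feat: list of channels, each channel a 2D list (H x W).
    Returns one scalar descriptor per channel (global average)."""
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def selective_channel_fusion(ir_feat, vis_feat):
    """Toy selective channel-attention fusion (assumed formulation):
    per-channel descriptors from the infrared and visible features
    compete via a two-way softmax, and each fused channel is the
    weighted sum of the corresponding input channels."""
    ir_desc = global_avg_pool(ir_feat)
    vis_desc = global_avg_pool(vis_feat)
    fused = []
    for c, (a, b) in enumerate(zip(ir_desc, vis_desc)):
        ea, eb = math.exp(a), math.exp(b)
        w_ir, w_vis = ea / (ea + eb), eb / (ea + eb)  # softmax over modalities
        ch = [[w_ir * x + w_vis * y for x, y in zip(r1, r2)]
              for r1, r2 in zip(ir_feat[c], vis_feat[c])]
        fused.append(ch)
    return fused
```

In the paper's actual network this weighting would operate on multi-scale deep features inside the fusion strategy; the sketch only shows the channel-competition idea on raw 2D arrays.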