Multimodal medical image fusion is vital for extracting complementary information and generating comprehensive images in clinical applications. However, existing deep learning-based fusion approaches struggle to effectively exploit frequency-domain information, to design appropriate integration strategies, and to model long-range contextual correlation. To address these issues, we propose a novel unsupervised multimodal medical image fusion method called Multiscale Fourier Attention and Detail-Aware Fusion (MFA-DAF). Our approach employs a multiscale Fourier attention encoder to extract rich features, followed by a detail-aware fusion strategy for comprehensive integration. The fused image is then reconstructed by a nested-connection Fourier attention decoder. We adopt a two-stage training strategy and design new loss functions for each stage. Experimental results demonstrate that our model outperforms other state-of-the-art methods, producing fused images with enhanced texture information and superior visual quality.
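To make the multiscale Fourier attention idea concrete, the following is a minimal NumPy sketch of one plausible interpretation: features at several scales are transformed to the frequency domain, re-weighted by their normalized spectral magnitude (an attention-like gating), and mapped back to the spatial domain. The function name `fourier_attention`, the scale set, and the magnitude-based weighting are illustrative assumptions, not the paper's actual layer.

```python
import numpy as np

def fourier_attention(feature, scales=(1, 2, 4)):
    """Illustrative sketch of multiscale frequency-domain attention.

    NOTE: this is an assumption-based toy, not MFA-DAF's real encoder.
    Each scale is processed via FFT, gated by its spectral magnitude,
    and the results are averaged back at the original resolution.
    """
    h, w = feature.shape
    attended = np.zeros_like(feature, dtype=np.float64)
    for s in scales:
        # Emulate a coarser scale by strided subsampling.
        sub = feature[::s, ::s]
        spec = np.fft.fft2(sub)
        # Attention weights from the normalized spectral magnitude.
        mag = np.abs(spec)
        weights = mag / (mag.max() + 1e-8)
        # Emphasize dominant frequencies, then return to spatial domain.
        out = np.fft.ifft2(spec * weights).real
        # Nearest-neighbor upsample back to the input size and accumulate.
        up = np.kron(out, np.ones((s, s)))[:h, :w]
        attended += up / len(scales)
    return attended

img = np.random.rand(8, 8)
fused = fourier_attention(img)
```

In a real network the magnitude-based gate would be replaced by learned parameters, but the sketch shows why operating on the FFT of each scale gives the layer global (long-range) receptive coverage at low cost.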
Lamei Wang, Xinyu Xie, Yun Yang, Dongping Xiong, Hong Zhou, Bin Yang, Kok Lay Teo, Bingo Wing-Kuen Ling, Xiaozhi Zhang