Compared with images of general scenes, multi-modal medical images contain more fine-grained detail and demand higher feature integrity. When fusing multi-modal medical images, features at different scales therefore need to be extracted accurately, which a generic convolutional neural network (CNN) cannot guarantee. To address this problem, a convolutional neural network based on multi-scale feature fusion is proposed to improve the fusion quality of multi-modal medical images. Specifically, the proposed network consists of two trunks and three branches that extract features at different scales. The trunks and branches are connected by fusion modules (FM) that fuse the multi-scale features. Finally, the fused multi-scale features are refined by multiple convolutions and concatenated with the trunk features to reconstruct the fused image. Objective and subjective evaluations show that the proposed method outperforms other state-of-the-art methods on most metrics.
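The core idea of multi-scale fusion can be illustrated with a minimal NumPy sketch. This is not the authors' network: the trunk/branch CNN, the fusion modules, and the reconstruction stage are replaced here by hand-written pooling, a simple per-scale max fusion rule, and nearest-neighbour upsampling, purely to show how features from two modalities can be combined at several scales. The function names (`avg_pool`, `upsample`, `fuse_multiscale`) and the choice of scales are illustrative assumptions.

```python
import numpy as np

def avg_pool(img, k):
    # Downsample by factor k using non-overlapping average pooling
    # (stands in for the coarser-scale feature extraction of a branch).
    h, w = img.shape
    h2, w2 = h - h % k, w - w % k
    return img[:h2, :w2].reshape(h2 // k, k, w2 // k, k).mean(axis=(1, 3))

def upsample(img, k):
    # Nearest-neighbour upsampling by factor k, to bring a coarse
    # feature map back to the original resolution.
    return np.repeat(np.repeat(img, k, axis=0), k, axis=1)

def fuse_multiscale(a, b, scales=(1, 2, 4)):
    # Extract features from both modalities at several scales,
    # fuse each scale with a simple max rule (a stand-in for the
    # learned fusion modules), then average the upsampled maps.
    fused = np.zeros_like(a, dtype=float)
    for k in scales:
        fa, fb = avg_pool(a, k), avg_pool(b, k)
        f = np.maximum(fa, fb)  # per-scale fusion rule (assumption)
        fused += upsample(f, k)[:a.shape[0], :a.shape[1]]
    return fused / len(scales)
```

In the proposed network these hand-crafted operators are replaced by learned convolutions, and the per-scale fusion is performed by the FM blocks rather than a fixed max rule.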