Anshita Kesharwani, Kaptan Singh, Amit Saxena
Color medical images from multiple modalities must be fused to provide a more informative representation for detailed diagnosis, and individual fusion methods are inefficient when handling multiple modalities. The primary objective of this paper is to design a hybrid combination of fusion methodologies that offers medical analysts multiple perspectives on the information. The paper makes three main contributions. First, a modified Weighted Gradient Fusion (MWGF) approach is proposed, since the original WGF method can lose redundant information; features are extracted from two multimodal images with this method, yielding I_MWGF. Second, contrast enhancement is incorporated by scaling the DC coefficient in the compressed DCT domain, generating I_SDC as multi-focus features; this modification mitigates poor-contrast issues. Third, to provide more detailed features, the MWGF and SDC outcomes are fused using pixel-level averaging and wavelet-based image fusion rules. The DC coefficient of the luminance component is scaled in the LAB color space, with the Twicing function used as the scaling function, enhancing entropy and brightness features. The inverse DWT reconstructs the true-color image for diagnosis. The method is evaluated quantitatively on various multimodal medical image pairs, including MRI, CT, and PET scans acquired in different environments.
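The DC-coefficient scaling step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes block-wise 2D DCT processing of an 8-bit grayscale (or luminance) channel, and uses the standard Twicing function g(x) = x(2 − x) on the DC coefficient normalized to [0, 1], which brightens low-intensity regions since g(x) ≥ x on that interval. The block size and normalization constant are illustrative choices.

```python
import numpy as np
from scipy.fft import dctn, idctn

def twicing(x):
    # Twicing function g(x) = x * (2 - x); maps [0, 1] -> [0, 1]
    # and satisfies g(x) >= x, so it raises brightness/contrast.
    return x * (2.0 - x)

def scale_dc_blocks(image, block=8):
    """Scale the DC coefficient of each DCT block with the Twicing function.

    `image` is a 2D float array with values in [0, 255]; block size 8 is an
    illustrative choice mirroring common compressed-domain processing.
    """
    h, w = image.shape
    out = np.zeros((h, w), dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            patch = image[i:i + block, j:j + block].astype(float)
            coeffs = dctn(patch, norm="ortho")
            # For an orthonormal 2D DCT of an n-pixel patch, the maximum
            # possible DC value of 8-bit input is sqrt(n) * 255.
            dc_max = np.sqrt(patch.size) * 255.0
            coeffs[0, 0] = twicing(coeffs[0, 0] / dc_max) * dc_max
            out[i:i + block, j:j + block] = idctn(coeffs, norm="ortho")
    return np.clip(out, 0.0, 255.0)
```

For example, a flat dark 8x8 block of intensity 64 is lifted to roughly intensity 112, while the AC coefficients (texture detail) are left untouched.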