Qingze Zhou, Qing Guo, Yu Tian, Letian Yu
Most existing deep learning-based pansharpening methods fail to accommodate the varying correlations between panchromatic (PAN) and multispectral (MS) images from different sensors, which can lead to poor generalization and a lack of universality. To address these issues, we propose a self-supervised pansharpening network constrained by an orthogonal space projection prior (SPOP). SPOP employs three input streams: the original PAN image, the original MS image, and the prior information extracted through orthogonal space projection (OSP). The OSP exploits the properties of orthogonal vectors to robustly extract the required prior information across diverse sensor datasets. To ensure comprehensive feature extraction, we design a multiscale module, a multiresidual feature extraction module, a dual attention module, and a Densefuse module. In addition, we design a spatial–spectral joint loss function that uses the input PAN and MS images as self-supervised labels. The joint loss comprises three terms that constrain training from the spatial, the spectral, and the combined spatial–spectral perspectives, respectively, which better aligns with practical fusion requirements. Subjective and objective experimental results confirm that the proposed SPOP surpasses commonly used pansharpening methods in both fusion quality and generalization performance across diverse sensor datasets. The code is available online.
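To make the OSP idea concrete, the sketch below projects the PAN image onto the orthogonal complement of the subspace spanned by the upsampled MS bands, using the classical projector P = I − U(UᵀU)⁻¹Uᵀ; the result isolates spatial detail that the MS bands cannot linearly explain. This is a minimal illustration assuming one plausible construction of the prior: the function name `osp_prior` and the band-stacking choices are hypothetical, not the authors' implementation.

```python
import numpy as np

def osp_prior(pan, ms_up):
    """Orthogonal space projection prior (illustrative sketch).

    Computes p_perp = (I - U (U^T U)^{-1} U^T) p, i.e. the component of the
    PAN image orthogonal to the subspace spanned by the upsampled MS bands.

    pan:   (H, W) panchromatic image.
    ms_up: (H, W, B) multispectral image upsampled to the PAN grid.
    Returns an (H, W) residual map usable as prior information.
    """
    H, W, B = ms_up.shape
    U = ms_up.reshape(H * W, B)            # columns span the MS subspace
    p = pan.reshape(H * W)
    # Least squares yields (U^T U)^{-1} U^T p without forming the huge projector.
    coeffs, *_ = np.linalg.lstsq(U, p, rcond=None)
    return (p - U @ coeffs).reshape(H, W)  # orthogonal-complement component

# Toy usage with random arrays (shapes only; real inputs come from a sensor pair).
pan = np.random.rand(64, 64)
ms_up = np.random.rand(64, 64, 4)
prior = osp_prior(pan, ms_up)              # (64, 64) spatial-detail prior
```

In the network described above, such a prior map would form the third input stream alongside the raw PAN and MS images.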