Pansharpening combines complementary features from panchromatic (PAN) and multispectral (MS) images to produce high-resolution MS images; extracting features completely and reconstructing the image with high quality is therefore the key to obtaining an ideal fused image. We propose an attention-based and staged iterative network (ASIN) framework, which treats each subnetwork of the iterative network as one stage of a multistage pansharpening process and performs feature extraction and image reconstruction at each stage. Exploiting the iterative network's cross-stage deep features for the hierarchical extraction of refined MS and PAN features, we build the feature-extraction framework from a large kernel attention (LKA) module and a cascaded asymmetric coupled representation module (ACRM), and use an attention fusion module (AFM) to fuse the PAN and MS features in the image reconstruction stage. LKA has channel and spatial adaptability as well as a strong ability to establish long-range dependencies, which makes feature extraction more complete. The ACRM outputs refined spectral and spatial features by learning the hybrid correlation of MS and PAN images. The AFM effectively exploits the spectral and spatial features of the input, enabling the network to reduce information loss and retain important information. Quantitative comparison and qualitative analysis on the QuickBird (QB), WorldView-2 (WV2), and Gaofen-2 (GF-2) datasets demonstrate that our method outperforms the comparison methods.
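The abstract does not specify the internals of the AFM, but the general idea of attention-weighted fusion of spectral (MS) and spatial (PAN) features can be sketched as follows. This is a minimal NumPy illustration under assumed design choices, not the paper's actual module: the function name `attention_fuse`, the sigmoid gates, and the channel/spatial attention split are all hypothetical.

```python
import numpy as np

def attention_fuse(ms_feat, pan_feat):
    """Hypothetical attention-based fusion sketch (not the paper's AFM).

    ms_feat:  (C, H, W) multispectral feature map
    pan_feat: (C, H, W) panchromatic feature map, assumed already
              projected to the same C channels
    """
    # Channel (spectral) attention from MS: global average pool, sigmoid gate.
    chan_desc = ms_feat.mean(axis=(1, 2))            # (C,)
    chan_att = 1.0 / (1.0 + np.exp(-chan_desc))      # (C,)
    # Spatial attention from PAN: per-pixel channel mean, sigmoid gate.
    spat_desc = pan_feat.mean(axis=0)                # (H, W)
    spat_att = 1.0 / (1.0 + np.exp(-spat_desc))      # (H, W)
    # Fuse: spectrally reweighted MS plus spatially reweighted PAN.
    return chan_att[:, None, None] * ms_feat + spat_att[None, :, :] * pan_feat

# Example: fuse random 4-band features at 8x8 resolution.
ms = np.random.rand(4, 8, 8)
pan = np.random.rand(4, 8, 8)
fused = attention_fuse(ms, pan)   # shape (4, 8, 8)
```

In a trained network the two gates would be learned (e.g., small convolutions) rather than fixed sigmoids of pooled statistics; the sketch only shows how spectral and spatial cues can each modulate the fused output.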