Zhu Han, Ce Zhang, Lianru Gao, Zhiqiang Zeng, Michael K. Ng, Bing Zhang, Jocelyn Chanussot
Cross-scene image classification aims to transfer prior knowledge of ground materials to annotate regions with different distributions and to reduce manual annotation cost in the field of remote sensing. However, existing approaches focus on single-source domain generalization to unseen target domains and are easily confused by large real-world domain shifts, due to limited training information and insufficient diversity-modeling capacity. To address this gap, we propose a novel multi-source collaborative domain generalization framework (MS-CDG) based on the homogeneity and heterogeneity characteristics of multi-source remote sensing data, which considers data-aware adversarial augmentation and model-aware multi-level diversification simultaneously to enhance cross-scene generalization performance. The data-aware adversarial augmentation adopts a semantically guided adversarial neural network to generate MS samples by adaptively learning realistic channel and distribution changes across domains. From the perspectives of cross-domain and intra-domain modeling, the model-aware diversification transforms the shared spatial-channel features of MS data through class-wise prototype and kernel mixture modules to address domain discrepancies and cluster different classes effectively. Finally, joint classification of the original and augmented MS samples is performed by introducing a distribution consistency alignment, which increases model diversity and ensures better domain-invariant representation learning. Extensive experiments on three public MS remote sensing datasets demonstrate the superior performance of the proposed method when benchmarked against state-of-the-art methods.
Xiaoqiang Lu, Tengfei Gong, Xiangtao Zheng
Yunhao Gao, Mengmeng Zhang, Wei Li, Ran Tao
Yunxiao Qi, Junping Zhang, Dongyang Liu, Ye Zhang
Baodi Liu, Wen-Yang Xie, Jie Meng, Ye Li, Yanjiang Wang