Zhixiang Xue, Xuchu Yu, Pengqiang Zhang, Xiong Tan, Anzhu Yu, Bing Liu
With the rapid development of remote sensing data acquisition technology, multimodal images of the same observed scenes have become widely available. These multimodal remote sensing images can provide complementary information for land cover classification. In this article, we propose a novel self-supervised feature learning and few-shot classification model for multimodal remote sensing images, called S2FL. Specifically, a contrastive learning architecture is investigated to learn spatial feature representations from very high resolution (VHR) imagery, and the spectral features from hyperspectral data are integrated with the learned spatial features for few-shot land cover classification. Classification experiments are conducted on a widely used benchmark dataset, Houston 2018, to verify the effectiveness and superiority of the proposed S2FL model against several state-of-the-art baseline approaches.
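The abstract outlines two steps: contrastive self-supervised learning of spatial features from VHR imagery, and fusion of those features with hyperspectral spectral features. The sketch below is a minimal, hypothetical illustration of these two ingredients in NumPy — a SimCLR-style NT-Xent contrastive loss over two augmented views, and feature fusion by concatenation. The function names and the concatenation-based fusion are assumptions for illustration, not the authors' actual S2FL implementation.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style normalized temperature-scaled cross-entropy loss.

    z1, z2: (N, D) embeddings of two augmented views of the same N patches.
    Row i of z1 and row i of z2 form a positive pair; all other rows are
    treated as negatives.
    """
    # L2-normalize embeddings so similarity is cosine similarity
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)          # (2N, D)
    sim = z @ z.T / temperature                   # pairwise similarity matrix
    np.fill_diagonal(sim, -np.inf)                # exclude self-similarity
    n2 = z.shape[0]
    n = z1.shape[0]
    # positive index of sample i is i+N (first half) or i-N (second half)
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])
    # cross-entropy of each row against its positive column
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n2), pos].mean()

def fuse_features(spatial, spectral):
    """Fuse learned spatial features with spectral features (here: concat)."""
    return np.concatenate([spatial, spectral], axis=1)

# Toy usage: 4 patches, 8-D spatial embeddings, 16-band spectra
rng = np.random.default_rng(0)
view1 = rng.normal(size=(4, 8))
view2 = view1 + 0.05 * rng.normal(size=(4, 8))    # mildly perturbed view
loss = nt_xent_loss(view1, view2)
fused = fuse_features(view1, rng.normal(size=(4, 16)))
```

In practice the two views would come from data augmentations of VHR patches and the embeddings from an encoder network; the fused (spatial + spectral) vectors would then feed a few-shot classifier.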