In this paper, we aim to improve the transfer-learning ability of 2D convolutional neural networks (CNNs) for building extraction from optical imagery and digital surface models (DSMs) using a 2D-3D co-learning framework. Unlabeled target-domain data are incorporated as unlabeled training pairs to optimize the training procedure. During training, our framework adaptively transfers unsupervised mutual information between the 2D modality and the 3D modality (i.e., DSM-derived point clouds) through a soft connection governed by a predefined loss function. Experimental results on a spaceborne-to-airborne cross-domain case demonstrate that the proposed framework quantitatively and qualitatively improves building extraction from single-modality optical images at test time.
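The soft connection described above can be sketched as a combined objective: a supervised term on labeled source data plus a consistency term that couples the 2D and 3D branch predictions on unlabeled target pairs. The sketch below is illustrative only and not the paper's actual formulation; the symmetric-KL coupling, the weight `lam`, and all function names are assumptions.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def co_learning_loss(logits_2d, logits_3d, labels=None, lam=0.1, eps=1e-8):
    """Hypothetical co-learning objective (an assumption, not the paper's
    exact loss): supervised cross-entropy on the 2D branch when labels are
    available, plus a symmetric-KL "soft connection" that aligns the 2D and
    3D branch predictions on (possibly unlabeled) training pairs."""
    p2 = softmax(logits_2d)
    p3 = softmax(logits_3d)
    # Symmetric KL divergence between the two branches' class distributions.
    kl = np.sum(p2 * (np.log(p2 + eps) - np.log(p3 + eps)), axis=-1)
    kl += np.sum(p3 * (np.log(p3 + eps) - np.log(p2 + eps)), axis=-1)
    consistency = kl.mean()
    if labels is None:
        # Unlabeled target-domain pairs contribute only the consistency term.
        return lam * consistency
    ce = -np.log(p2[np.arange(len(labels)), labels] + eps).mean()
    return ce + lam * consistency
```

In this sketch, unlabeled target-domain samples still shape the shared representation through the consistency term, which is one plausible way such a framework can exploit unlabeled data during training.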
Fayong Zhang, Kejun Liu, Yuanyuan Liu, Chaofan Wang, Wujie Zhou, Hongyan Zhang, Lizhe Wang
Jie Chen, Peien He, Jingru Zhu, Ya Guo, Geng Sun, Min Deng, Haifeng Li
Xingliang Huang, Kaiqiang Chen, Zhirui Wang, Xian Sun