Jiayun Liu, Shengsheng Wang, Xiaowei Hou, Wenzhuo Song
Automatically extracting buildings from high-spatial-resolution remote sensing imagery is an important task in many applications, but the large variation in the appearance and spatial distribution of man-made buildings makes it challenging. In recent years, convolutional neural networks (CNNs) have made remarkable progress in computer vision, and many published papers have successfully applied deep CNNs to remote sensing. However, most contributions require complex structures and large numbers of parameters, which lead to redundant computation and limit the applicability of the models. To address these issues, we propose SSNet, a deep residual serial segmentation network: an end-to-end semantic segmentation network for extracting buildings from high-spatial-resolution remote sensing imagery. SSNet reduces network complexity and computation by drawing on the advantages of U-Net and ResNet, while improving detection accuracy. SSNet is extensively evaluated on two large remote sensing datasets covering a wide range of urban settlement appearances. Comparisons between SSNet and state-of-the-art algorithms demonstrate the effectiveness and superiority of the proposed model for building extraction.
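The abstract does not give SSNet's exact layer configuration, but the residual learning it borrows from ResNet can be illustrated with a minimal sketch. The block below is an assumption for illustration only (single-channel NumPy convolutions, not the paper's architecture): a residual block computes y = ReLU(x + F(x)), where the identity shortcut lets the block learn only the residual F and keeps input and output shapes identical, which is what makes such blocks easy to stack inside an encoder-decoder segmentation network.

```python
import numpy as np

def conv3x3(x, w):
    # 'Same' 3x3 convolution on a single-channel 2-D array with zero padding,
    # so the output has the same spatial size as the input.
    h, wd = x.shape
    padded = np.pad(x, 1)
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * w)
    return out

def residual_block(x, w1, w2):
    # y = ReLU(x + F(x)): F is a small conv stack, and the identity
    # shortcut (x + ...) requires F to preserve the input shape.
    f = np.maximum(conv3x3(x, w1), 0)  # conv + ReLU
    f = conv3x3(f, w2)                 # second conv
    return np.maximum(x + f, 0)        # add skip connection, final ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
w1 = rng.standard_normal((3, 3))
w2 = rng.standard_normal((3, 3))
y = residual_block(x, w1, w2)
print(y.shape)  # same shape as the input, as the shortcut requires
```

Because every residual block preserves spatial dimensions, such blocks can replace the plain convolutional stages of a U-Net-style encoder-decoder without disturbing its skip connections, which is the combination the abstract describes at a high level.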