Semantic segmentation of remote sensing images is a critical task in computer vision, yet the distinctive characteristics of these images are often overlooked. Because segmentation targets in satellite remote sensing images closely resemble the background, conventional deep networks tend to lose the boundary features and contextual information that are pivotal for accurate segmentation. To address this issue, we enhance a previously proposed decoupled network architecture. The improved network, named SplitNet, retrieves edge feature information from a shallow network and global features from a deep network applied to downsampled images. We further introduce a novel feature-map fusion method that integrates edge, body, and global features, sharpening the network's focus on the edge locations of the segmentation target. Our experiments demonstrate that SplitNet achieves strong results on the DeepGlobe land classification dataset.
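The abstract does not specify how the edge, body, and global feature maps are combined. As a minimal sketch only, assuming the global branch runs on a downsampled image and that fusion is a simple upsample-and-sum (the function names and shapes here are hypothetical, not the authors' actual method), the idea can be illustrated with NumPy:

```python
import numpy as np

def upsample_nearest(x, factor):
    # Nearest-neighbour upsampling of a (C, H, W) feature map.
    return x.repeat(factor, axis=1).repeat(factor, axis=2)

def fuse_features(edge, body, global_feat, factor):
    # Hypothetical fusion: bring the low-resolution global features
    # up to the edge/body resolution, then sum the three maps.
    g_up = upsample_nearest(global_feat, factor)
    return edge + body + g_up

# Toy shapes: 16-channel maps; the global branch sees a 4x-downsampled input.
edge = np.random.rand(16, 64, 64)
body = np.random.rand(16, 64, 64)
g = np.random.rand(16, 16, 16)

fused = fuse_features(edge, body, g, factor=4)
print(fused.shape)  # (16, 64, 64)
```

In a real network the fusion would typically use learned upsampling and a convolution over the concatenated maps rather than a plain sum; this sketch only shows the resolution alignment the abstract implies.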
Chu He, Shenglin Li, Dehui Xiong, Peizhang Fang, Mingsheng Liao