Guanzhou Chen, Chanjuan He, Tong Wang, Kun Zhu, Puyun Liao, Xiao-Dong Zhang
Semantic segmentation is one of the fundamental tasks in pixel-level remote sensing image analysis. Currently, most high-performance semantic segmentation methods are trained in a supervised manner. These methods require large numbers of image labels, but manual annotations are difficult to obtain. To address this problem, in this letter we propose an efficient unsupervised remote sensing image segmentation method based on superpixel segmentation and fully convolutional networks (FCNs). Our method rapidly achieves pixel-level segmentation of images at various scales without any manual labels or prior knowledge. We use the superpixel segmentation results as synthetic ground truth to guide the gradient descent direction during FCN training. In experiments on three public datasets, our method achieved high performance compared to current unsupervised image segmentation methods. Specifically, it achieves an Adjusted Mutual Information (AMI) score of 0.2955 on the Gaofen Image Dataset (GID) while processing each 7200 × 6800-pixel image in just 30 seconds.
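The core idea — deriving a synthetic ground-truth label map from superpixels so that a network can be trained without manual annotations — can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the square-grid partition stands in for a real superpixel algorithm (e.g. SLIC), and the tiny k-means over superpixel mean colors stands in for whatever grouping the paper actually uses to assign pseudo-labels; the function names and parameters are invented for this sketch.

```python
import numpy as np

def grid_superpixels(h, w, block=8):
    # Stand-in for a real superpixel algorithm (assumption): partition the
    # image into square blocks and give each block a unique integer id.
    ys, xs = np.mgrid[0:h, 0:w]
    cols = (w + block - 1) // block
    return (ys // block) * cols + (xs // block)

def pseudo_labels(image, superpixels, k=2, iters=10):
    # Cluster superpixel mean colors with a small k-means to produce a
    # synthetic ground-truth label map; in the paper's pipeline such a map
    # would supervise the FCN's cross-entropy loss.
    ids = np.unique(superpixels)
    means = np.stack([image[superpixels == i].mean(axis=0) for i in ids])
    # Deterministic farthest-point initialization of the k centers.
    centers = [means[0]]
    for _ in range(1, k):
        d = np.min([((means - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(means[np.argmax(d)])
    centers = np.stack(centers)
    for _ in range(iters):
        assign = np.argmin(((means[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(assign == c):
                centers[c] = means[assign == c].mean(axis=0)
    # Map each pixel's superpixel id to its cluster label.
    return assign[np.searchsorted(ids, superpixels)]

# Demo on a synthetic two-tone image: the pseudo-label map should
# recover the left/right split without any manual annotation.
h, w = 32, 32
img = np.zeros((h, w, 3))
img[:, w // 2:] = 1.0
sp = grid_superpixels(h, w)
labels = pseudo_labels(img, sp, k=2)
print(labels.shape)  # (32, 32)
```

In the full method these per-pixel pseudo-labels would replace manual annotations as the target of the FCN's loss, so gradient descent is guided entirely by the superpixel structure of the image.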