Wansong Zhang, Wenzhong Yang, Yabo Yin, Danny Chen, Xianfeng Wang, Hu Zhao
Semantic segmentation of remote sensing images has important application value in fields such as farmland anomaly detection and urban planning. However, the low-level features extracted by deep neural networks retain rich spatial detail while also introducing redundancy and noise, and the significant differences in semantic level and spatial distribution between high-level and low-level features make their effective fusion challenging. To this end, we propose a Multi-Feature Enhancement Fusion Network that improves local feature expression and global semantic modelling by fusing edge and semantic information. The Edge Enhancement Module uses traditional edge detection operators to sharpen the details of edge features. The Multi-Feature Fusion Module effectively integrates semantic and edge features to strengthen the representation of fine-grained information. The Local-Global Feature Enhancement Module hierarchically models local details and global context, and the Multi-Level Fusion segmentation head integrates features from different levels to fully exploit both shallow spatial details and deep semantic information. Extensive experiments on three publicly available datasets demonstrate that the proposed model outperforms state-of-the-art methods. The code will be published at: https://github.com/zwsbh/MFEF.
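The abstract's Edge Enhancement Module relies on traditional edge detection operators. As a minimal sketch of that idea, the following NumPy code computes a Sobel gradient-magnitude edge map and blends it into a feature map; the fusion rule (`edge_enhance` with weight `alpha`) is a hypothetical illustration, not the paper's actual module.

```python
import numpy as np

def sobel_edge_map(img):
    """Gradient magnitude via the classic 3x3 Sobel operators."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")  # replicate borders so output size matches input
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (win * kx).sum()  # horizontal gradient
            gy[i, j] = (win * ky).sum()  # vertical gradient
    return np.hypot(gx, gy)

def edge_enhance(feature, img, alpha=0.5):
    """Hypothetical fusion: add a normalised Sobel edge prior to a feature map."""
    edges = sobel_edge_map(img)
    edges = edges / (edges.max() + 1e-8)  # scale edge strength to [0, 1]
    return feature + alpha * edges
```

On a simple step image (left half 0, right half 1), the edge map is zero in the flat regions and positive along the vertical boundary, which is the detail signal such a module would inject into low-level features.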