JOURNAL ARTICLE

Light-Weight Attention Semantic Segmentation Network for High-Resolution Remote Sensing Images

Abstract

Semantic segmentation of high-resolution remote sensing (HRRS) images is increasingly important. Popular approaches rely on deep learning, which demands large amounts of labeled data and powerful computing resources; when either is insufficient, performance degrades severely. To address this problem, we propose a light-weight network with attention modules for semantic segmentation of HRRS images. The depth and width of the network are designed so that it has a small number of parameters, ensuring efficient training. The network adopts an encoder-decoder architecture: feature maps of different scales from the encoder are resized and concatenated to perform multi-scale feature fusion, and an attention mechanism in the decoder captures global semantic information from the context. With a single GTX 2080 Ti GPU and only 15 MB of parameters, our light-weight network achieves high-quality results on the ISPRS Vaihingen dataset with fewer parameters than other popular approaches.
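The encoder-decoder pipeline described in the abstract, resizing encoder feature maps to a common scale, concatenating them for multi-scale fusion, and reweighting the fused features with attention, can be sketched in NumPy. This is an illustrative sketch only, not the paper's implementation; the function names, channel counts, nearest-neighbour resizing, and the simple softmax channel-attention formulation are all assumptions made for clarity.

```python
import numpy as np

def upsample_nn(feat, target_hw):
    """Nearest-neighbour upsampling of a (C, H, W) feature map (illustrative only)."""
    c, h, w = feat.shape
    th, tw = target_hw
    rows = np.arange(th) * h // th   # map each target row to a source row
    cols = np.arange(tw) * w // tw   # map each target column to a source column
    return feat[:, rows][:, :, cols]

def fuse_multiscale(feats):
    """Resize encoder feature maps to the largest scale and concatenate on channels."""
    th, tw = feats[0].shape[1:]
    resized = [upsample_nn(f, (th, tw)) for f in feats]
    return np.concatenate(resized, axis=0)

def channel_attention(fused):
    """Toy channel attention: global average pooling -> softmax weights -> reweight."""
    pooled = fused.mean(axis=(1, 2))        # one scalar per channel, shape (C,)
    w = np.exp(pooled - pooled.max())
    w = w / w.sum()                         # softmax over channels
    return fused * w[:, None, None]

# Three hypothetical encoder stages at decreasing resolution: (channels, H, W)
feats = [np.random.rand(16, 64, 64),
         np.random.rand(32, 32, 32),
         np.random.rand(64, 16, 16)]
fused = fuse_multiscale(feats)       # (16 + 32 + 64) channels at 64 x 64
attended = channel_attention(fused)
print(fused.shape, attended.shape)
```

A real network would learn the attention weights (for example from a small MLP over the pooled vector) and fuse with learned convolutions rather than plain concatenation; the sketch only shows the data flow.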

Keywords:
Semantic segmentation; remote sensing; deep learning; attention mechanism; encoder-decoder; image resolution; computer vision

Metrics

Cited by: 13
References: 16
FWCI (Field-Weighted Citation Impact): 1.05
Citation Normalized Percentile: 0.79

Topics

Advanced Image and Video Retrieval Techniques (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Advanced Neural Network Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Remote-Sensing Image Classification (Physical Sciences → Engineering → Media Technology)