JOURNAL ARTICLE

Local and global semantic relationship network for remote sensing scene classification

Abstract

Remote sensing scene (RSS) classification is an important research topic for high-resolution (HR) remote sensing image understanding. Recently, many approaches, including data-driven and machine learning methods, have been proposed for this task. However, accurately identifying scenes in HR remote sensing images remains challenging, since it is difficult to effectively extract multiscale and key features from the complex geometric structures and spatial patterns of large-scale ground objects. In this paper, we propose a novel local and global semantic relationship network (LGSRNet) for RSS classification. ConvNeXt-T, which performs comparably to the local-attention Swin Transformer, is adopted to extract feature maps with strong discriminative ability. Meanwhile, a semantic relation learning (SRL) module built on graph convolutional networks is presented to further learn the semantic relationships between the labels of RSS categories within the spatial domain. Cosine similarity is then adopted to fuse the ConvNeXt-T and SRL branches. Extensive experiments on two benchmark datasets (AID and NWPU-RESISC45) demonstrate that LGSRNet outperforms several other state-of-the-art methods.
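The fusion described in the abstract can be sketched in minimal form: per-class vectors are produced by a two-layer graph convolution over label embeddings (standing in for the SRL module), and class scores are obtained as the cosine similarity between a pooled image feature (standing in for the ConvNeXt-T output) and each class vector. All dimensions, the adjacency matrix, and the random features below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(h, a_hat, w):
    """One graph-convolution step: LeakyReLU(A_hat @ H @ W)."""
    z = a_hat @ h @ w
    return np.where(z > 0, z, 0.01 * z)

def cosine_scores(img_feat, class_feat):
    """Row-wise cosine similarity between image and per-class features."""
    a = img_feat / np.linalg.norm(img_feat, axis=1, keepdims=True)
    b = class_feat / np.linalg.norm(class_feat, axis=1, keepdims=True)
    return a @ b.T

# Hypothetical sizes, not the paper's: 4 scene classes, batch 2,
# feature dim 8, label-embedding dim 16.
C, B, D, E = 4, 2, 8, 16
adj = np.eye(C) + 0.1 * np.ones((C, C))            # label graph: self-loops + weak relations
deg = adj.sum(1)
a_hat = adj / np.outer(np.sqrt(deg), np.sqrt(deg))  # symmetrically normalized adjacency

label_emb = rng.standard_normal((C, E))             # label embeddings (learnable in practice)
w1, w2 = rng.standard_normal((E, D)), rng.standard_normal((D, D))
class_feat = gcn_layer(gcn_layer(label_emb, a_hat, w1), a_hat, w2)  # (C, D)

img_feat = rng.standard_normal((B, D))              # stands in for pooled backbone features
scores = cosine_scores(img_feat, class_feat)        # (B, C), each entry in [-1, 1]
```

Because both operands are L2-normalized, the resulting scores are bounded cosine similarities, which lets the image branch and the label-relation branch be compared on a common scale regardless of their raw feature magnitudes.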

Keywords:
Computer science, Discriminative model, RSS, Artificial intelligence, Feature extraction, Semantic feature, Graph, Pattern recognition, Data mining, Remote sensing, Geography

Metrics

Cited By: 1
FWCI (Field Weighted Citation Impact): 0.22
Refs: 19
Citation Normalized Percentile: 0.49

Topics

Remote-Sensing Image Classification (Physical Sciences → Engineering → Media Technology)
Automated Road and Building Extraction (Physical Sciences → Engineering → Ocean Engineering)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)
© 2026 ScienceGate. All rights reserved.