JOURNAL ARTICLE

DGCFNet: Dual Global Context Fusion Network for remote sensing image semantic segmentation

Yuan Liao, Tongchi Zhou, Lu Li, Jinming Li, Juntong Shen, Askar Hamdulla

Year: 2025   Journal: PeerJ Computer Science   Vol: 11   Pages: e2786   Publisher: PeerJ, Inc.

Abstract

Semantic segmentation of remote sensing images often faces challenges such as complex backgrounds, high inter-class similarity, and significant variation in intra-class visual attributes. Segmentation models therefore need to capture both rich local information and long-distance contextual information to overcome these challenges. Although convolutional neural networks (CNNs) are strong at extracting local information, the inherent locality of convolution limits their ability to establish long-range dependencies. Transformers, by contrast, can extract long-range contextual information through the multi-head self-attention mechanism, giving them a significant advantage in capturing global feature dependencies. To achieve high-precision semantic segmentation of remote sensing images, this article proposes a novel semantic segmentation network, the Dual Global Context Fusion Network (DGCFNet), which is based on an encoder-decoder structure and combines the strength of CNNs in capturing local information with that of Transformers in establishing long-range contextual information. Specifically, to further enhance the Transformer's ability to model global context, a dual-branch global extraction module is proposed, in which the global compensation branch not only supplements global information but also preserves local information. In addition, to increase attention to salient regions, a cross-level information interaction module is adopted to enhance the correlation between features at different levels. Finally, to optimize the continuity and consistency of segmentation results, a feature interaction guided module adaptively fuses intra-layer and inter-layer information. Extensive experiments on the Vaihingen, Potsdam, and BLU datasets show that the proposed DGCFNet achieves better segmentation performance, with mIoU reaching 82.20%, 83.84%, and 68.87%, respectively.
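The abstract's core idea is fusing a CNN-style local branch with a Transformer-style global branch. The toy NumPy sketch below illustrates that general pattern only; it is not the authors' DGCFNet implementation, and all function names (`local_branch`, `global_branch`, `dual_fuse`) and the fusion weight `alpha` are hypothetical illustrative choices.

```python
import numpy as np

def local_branch(x, kernel):
    """3x3 'same' convolution on a single-channel map (CNN-style local information)."""
    h, w = x.shape
    padded = np.pad(x, 1)  # zero-pad one pixel on each side
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def global_branch(x):
    """Single-head self-attention over flattened pixels (long-range context)."""
    tokens = x.reshape(-1, 1)                      # (H*W, 1) pixel tokens
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability for softmax
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)        # each row is a distribution over all pixels
    return (attn @ tokens).reshape(x.shape)

def dual_fuse(x, kernel, alpha=0.5):
    """Weighted fusion of the local and global branches."""
    return alpha * local_branch(x, kernel) + (1 - alpha) * global_branch(x)

# Toy usage: a 4x4 feature map with an identity 3x3 kernel.
feat = np.arange(16, dtype=float).reshape(4, 4)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0
fused = dual_fuse(feat, identity)
```

Note that every output pixel of `global_branch` attends to every input pixel, which is exactly the long-range dependency convolution alone cannot provide; in a real network the tokens would be multi-channel learned features rather than raw scalar intensities.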

Keywords:
Computer science, Segmentation, Artificial intelligence, Encoder, Mutual information, Image segmentation, Pattern recognition (psychology), Computer vision

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 73
Citation Normalized Percentile: 0.11

Topics

Advanced Image Fusion Techniques
Physical Sciences →  Engineering →  Media Technology
Advanced Neural Network Applications
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Remote-Sensing Image Classification
Physical Sciences →  Engineering →  Media Technology

Related Documents

JOURNAL ARTICLE

DGLFNet: A Dual-Branch Global-Local Fusion Network for Remote Sensing Image Semantic Segmentation

Guangqi Li, Jing Wang, Xiaohui Yang, Tao Xu, Yi Sun

Journal: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing   Year: 2025   Pages: 1-16
JOURNAL ARTICLE

Context Aggregation Network for Remote Sensing Image Semantic Segmentation

Changxing Zhang, Xiangyu Bai, Dapeng Wang, Kexin Zhou

Journal: International Journal of Computational Intelligence and Applications   Year: 2024   Vol: 23 (03)
JOURNAL ARTICLE

Stair Fusion Network With Context-Refined Attention for Remote Sensing Image Semantic Segmentation

Jia Liu, Wenyi Hua, Wenhua Zhang, Fang Liu, Liang Xiao

Journal: IEEE Transactions on Geoscience and Remote Sensing   Year: 2024   Vol: 62   Pages: 1-17