Satyawant Kumar, Abhishek Kumar, Dong-Gyu Lee
Remotely captured images exhibit immense variability in scale and object appearance because of the complexity of the scenes they depict, which makes it challenging to capture the underlying attributes in both the global and the local context for segmentation. Existing networks struggle to extract the inherent features against cluttered backgrounds. To address these issues, we propose RSSGLT, a network for semantic segmentation of remote sensing images. We capture global and local features by combining the strengths of the transformer and convolution mechanisms. RSSGLT is an encoder–decoder design that exploits multiscale features. We construct an attention map module (AMM) to generate channelwise attention scores for fusing these features, and a global–local transformer block (GLTB) in the decoder network to support learning robust representations during the decoding phase. Furthermore, we design a feature refinement module (FRM) to refine the fused output of the shallow-stage encoder feature and the deepest GLTB feature of the decoder. Experimental results on two public datasets demonstrate the effectiveness of the proposed RSSGLT.
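The abstract does not specify how the AMM computes its channelwise attention scores. A common design for this kind of fusion is squeeze-and-excitation-style gating: pool each channel over the spatial dimensions, pass the result through a small two-layer gate, and use the resulting per-channel scores to weight the features being fused. The sketch below is an illustrative stand-in under that assumption, not the paper's actual module; the function name, shapes, and gate structure are all hypothetical.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channelwise_attention_fuse(f_a, f_b, w1, w2):
    """Fuse two feature maps of shape (C, H, W) with channelwise scores.

    Illustrative squeeze-and-excitation-style gating: global average
    pooling over the spatial dims, then a two-layer gate with sigmoid.
    This is a hypothetical sketch of an AMM-like fusion, not the
    module described in the paper.
    """
    summed = f_a + f_b                       # (C, H, W) joint statistics
    squeeze = summed.mean(axis=(1, 2))       # (C,) global average pool
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0.0))  # (C,) scores in (0, 1)
    # Weight one branch by the scores and the other by their complement.
    return gate[:, None, None] * f_a + (1.0 - gate)[:, None, None] * f_b

# Toy usage: fuse two random 8-channel feature maps.
rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
f_a = rng.standard_normal((C, H, W))
f_b = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))   # gate weights (untrained, random)
w2 = rng.standard_normal((C, C // 2))
fused = channelwise_attention_fuse(f_a, f_b, w1, w2)
print(fused.shape)
```

Because the gate outputs lie in (0, 1), each output channel is a convex combination of the two input branches, so the fusion cannot amplify either feature map beyond its original magnitude.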