Xiangyu Zeng, Mingzhu Xu, Yijun Hu, Haoyu Tang, Yupeng Hu, Liqiang Nie
In recent years, the task of salient object detection in optical remote sensing images (RSI-SOD) has received extensive attention. Benefiting from the development of deep learning, much progress has been made in the RSI-SOD field. However, existing methods still struggle with several issues present in optical RSI, including uncertain numbers of salient objects, cluttered backgrounds, and interference from shadows. To address these challenges, we propose a novel approach, the Adaptive Edge-aware Semantic Interaction Network (AESINet), for efficient salient object detection. Specifically, to improve the extraction of complex edge information, we design a Local Detail Aggregation Module (LDAM). This module adaptively enhances the edge information of salient objects by leveraging our proposed difference perception mechanism. Notably, the difference perception mechanism is a novel edge enhancement method that requires no supervision from edge ground truth. Additionally, to accurately locate salient objects of varying numbers and scales, we design a Multi-scale Feature Enhancement Module (MFEM), which effectively captures and exploits multi-scale information. Moreover, we design a Deep Semantic Interaction Module (DSIM) to identify salient objects amid cluttered backgrounds and to effectively mitigate the interference of shadows. We conduct extensive experiments on three well-established optical RSI datasets, and the results demonstrate that our proposed model outperforms 14 state-of-the-art methods. All code and detection results are available at https://github.com/xumingzhu989/AESINet-TGRS.