Zhengyun Zhao, Qingpeng Yang, Shangqin Yang, Jun Wang
Abstract Depth features can provide complementary information for salient object detection (SOD). Most existing RGB-D SOD methods focus on fully combining RGB and depth features without distinguishing between the two modalities. In this paper, we propose a new depth-guided cross-modal residual adaptive network for RGB-D SOD. Two independent ResNet-50 backbones extract features from the RGB and depth modalities respectively. A cross-modal channel-wise refinement module is then designed to obtain complementary modal information, and a cross-modal guided module uses this complementary information to guide RGB feature extraction. Finally, a residual adaptive selection module strengthens the mutual spatial attention between the two modal features to achieve multimodal information fusion. Experimental results show that our method achieves a more reasonable fusion of RGB and depth features, and verify the effectiveness of our final saliency model.
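The abstract does not give the exact module definitions, but the general idea of depth-guided channel-wise refinement with a residual connection can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions: channel weights are derived from globally pooled depth features and used to re-weight the RGB features, with a residual path preserving the original RGB signal. All function names, shapes, and the choice of sigmoid gating here are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def depth_guided_channel_refine(rgb_feat, depth_feat):
    """Illustrative sketch of depth-guided channel-wise refinement.

    rgb_feat, depth_feat: feature maps of shape (C, H, W).
    Channel weights computed from the depth features re-weight the
    RGB features; a residual connection keeps the original RGB signal.
    """
    # Global average pooling over spatial dimensions -> per-channel descriptor (C,)
    depth_desc = depth_feat.mean(axis=(1, 2))
    # Gate to (0, 1) and reshape to broadcast over H and W
    weights = sigmoid(depth_desc)[:, None, None]  # (C, 1, 1)
    # Depth-guided re-weighting plus residual connection
    return rgb_feat + weights * rgb_feat

rgb = np.random.rand(64, 8, 8)
depth = np.random.rand(64, 8, 8)
fused = depth_guided_channel_refine(rgb, depth)
print(fused.shape)  # (64, 8, 8)
```

In a full network this operation would sit between the two ResNet-50 streams at each stage, with the refined features fed onward to guide RGB feature extraction.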