Yunzhi Zhuge, Gang Yang, Pingping Zhang, Huchuan Lu
Fully convolutional networks (FCNs) have significantly improved the performance of many pixel-labeling tasks, such as semantic segmentation and depth estimation. However, it remains non-trivial to thoroughly exploit multi-level convolutional feature maps and boundary information for salient object detection. In this paper, we propose a novel FCN framework that recurrently integrates multi-level convolutional features under the guidance of object boundary information. First, a deep convolutional network extracts multi-level feature maps and separately aggregates them at multiple resolutions, from which coarse saliency maps are generated. Meanwhile, a boundary information extraction branch is proposed to generate boundary features. Finally, an attention-based feature fusion module is designed to fuse boundary information into salient regions, achieving accurate boundary inference and semantic enhancement. The final saliency maps, obtained by combining the predicted boundary maps with the integrated saliency maps, are closer to the ground truths. Experiments and analysis on four large-scale benchmarks verify that our framework achieves new state-of-the-art results.
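The abstract does not specify the exact form of the attention-based feature fusion module. A minimal numpy sketch of one common formulation is shown below, in which a sigmoid attention map derived from the boundary features residually reweights the saliency features; the function name `attention_fuse` and the tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sigmoid(x):
    """Elementwise logistic function."""
    return 1.0 / (1.0 + np.exp(-x))

def attention_fuse(saliency_feat, boundary_feat):
    """Fuse boundary features into saliency features via a sigmoid
    attention map (a common residual-attention pattern; the paper's
    exact fusion may differ).

    saliency_feat, boundary_feat: arrays of shape (N, C, H, W).
    """
    attn = sigmoid(boundary_feat)        # attention map in (0, 1)
    return saliency_feat * (1.0 + attn)  # residual reweighting of salient regions

# Toy example: constant saliency features, zero boundary response.
s = np.ones((1, 4, 8, 8))
b = np.zeros((1, 4, 8, 8))
fused = attention_fuse(s, b)  # sigmoid(0) = 0.5, so every value becomes 1.5
```

The residual form `x * (1 + attn)` keeps the original saliency signal intact where the boundary response is weak, while amplifying features near predicted boundaries.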