Jing Zhang, Yuchao Dai, Fatih Porikli, Mingyi He
Salient object detection is a challenging task in complex scenes depicting multiple objects of different scales. Despite recent progress driven by convolutional neural networks, state-of-the-art salient object detection methods still fall short in handling such real-life scenarios. In this paper, we propose a new method, called MP-SOD, that exploits both Multi-scale feature fusion and Pyramid spatial pooling to detect salient object regions of varying sizes. Our framework consists of a front-end network and two multi-scale fusion modules. The front-end network learns an end-to-end mapping from the input image to a saliency map, into which a pyramid spatial pooling module is incorporated to aggregate rich context information from different spatial receptive fields. The multi-scale fusion modules integrate saliency cues across layers, from low-level detail patterns to high-level semantic information, by concatenating feature maps, in order to segment out salient objects at multiple scales. Extensive experimental results on eight benchmark datasets demonstrate the superior performance of our method compared with existing methods.
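To make the pyramid-spatial-pooling idea concrete, the following is a minimal NumPy sketch (not the authors' implementation): a feature map is average-pooled over grids of several sizes, each pooled map is upsampled back to the input resolution, and the results are concatenated along the channel axis so that later layers see context from several receptive fields. The function name, grid sizes, and nearest-neighbour upsampling are illustrative assumptions.

```python
import numpy as np

def pyramid_spatial_pooling(feat, grid_sizes=(1, 2, 4)):
    """Hypothetical sketch of pyramid spatial pooling.

    feat: array of shape (channels, height, width).
    Returns the input concatenated with one pooled-and-upsampled
    copy per grid size, along the channel axis.
    """
    c, h, w = feat.shape
    outputs = [feat]
    for g in grid_sizes:
        # Average-pool the map into a g x g grid of cells.
        pooled = np.zeros((c, g, g))
        for i in range(g):
            for j in range(g):
                ys = slice(i * h // g, (i + 1) * h // g)
                xs = slice(j * w // g, (j + 1) * w // g)
                pooled[:, i, j] = feat[:, ys, xs].mean(axis=(1, 2))
        # Nearest-neighbour upsample back to (h, w).
        row_idx = (np.arange(h) * g) // h
        col_idx = (np.arange(w) * g) // w
        outputs.append(pooled[:, row_idx][:, :, col_idx])
    return np.concatenate(outputs, axis=0)

feat = np.arange(16, dtype=float).reshape(1, 4, 4)
out = pyramid_spatial_pooling(feat, grid_sizes=(1, 2))
print(out.shape)  # (3, 4, 4): original + 1x1 context + 2x2 context
```

The coarsest grid (1x1) contributes global image context, while finer grids preserve coarse spatial layout; concatenation lets the network weigh these scales when localizing objects of very different sizes.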