Weisheng Li, Siqin Feng, Hua-Ping Guan, Ziwei Zhan, Gong Cheng
We present a spatiotemporal saliency detection method for videos. In contrast to previous methods that exploit only a subset of the underlying saliency cues or ignore motion information, the proposed method uses both appearance information, based on spatial edges and spatial color saliency, and motion information, based on temporal motion boundaries, as indicators of foreground object locations. Spatial color saliency is obtained by fusing three color features: color edge connectivity, color rarity, and color compactness. We then smooth the color saliency to suppress background noise and further boost detection accuracy. Next, we propose a fusion strategy that complementarily combines the smoothed color saliency, spatial edges, and temporal motion boundary cues to produce high-accuracy low-level saliency. From this low-level saliency, we generate framewise spatiotemporal saliency maps using a geodesic distance; high-quality results in subsequent frames are then obtained through the geodesic distance to the background region. Extensive quantitative and qualitative experiments on three public video datasets demonstrate the superiority of the proposed method over state-of-the-art algorithms.
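The geodesic-distance step in the abstract can be viewed as a multi-source shortest-path computation: every pixel's saliency is its geodesic distance to the assumed-background image border, with path costs taken from low-level saliency differences. The sketch below illustrates this idea only; the function name, the 4-connected grid, and the border-as-background seeding are assumptions for illustration, not the paper's exact formulation.

```python
import heapq

def geodesic_saliency(low_level, eps=1e-6):
    """Per-pixel saliency as geodesic distance to the image border.

    `low_level` is a 2-D list of low-level saliency values in [0, 1].
    Edge weights on the 4-connected grid are |s(u) - s(v)| + eps, so
    paths through uniform background cost almost nothing while paths
    crossing a salient region are expensive.  All border pixels act
    as background seeds (an illustrative assumption).
    """
    h, w = len(low_level), len(low_level[0])
    dist = [[float("inf")] * w for _ in range(h)]
    heap = []
    # Seed every border pixel with distance 0 (assumed background).
    for y in range(h):
        for x in range(w):
            if y in (0, h - 1) or x in (0, w - 1):
                dist[y][x] = 0.0
                heapq.heappush(heap, (0.0, y, x))
    # Multi-source Dijkstra over the grid graph.
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y][x]:
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + abs(low_level[y][x] - low_level[ny][nx]) + eps
                if nd < dist[ny][nx]:
                    dist[ny][nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist
```

On a frame where the low-level saliency is near zero everywhere except over the foreground object, background pixels receive distances close to zero while object pixels accumulate the large cost of crossing the saliency boundary, which is the separation the geodesic map exploits.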