Saliency detection for images has been studied for many years, and many methods have been designed for it. Background priors, which are often treated as a pseudo-background, are effective cues for locating salient objects in images. Although the image boundary is a commonly used background prior, it does not work well for images of complex scenes or for videos. In this paper, we explore how to identify background priors for a video and propose a saliency-based method that detects visual objects using these priors. For a video, we integrate multiple pairs of SIFT flows from long-range frames and conduct a bidirectional consistency propagation to obtain accurate and sufficient temporal background priors, which are combined with spatial background priors to generate spatiotemporal background priors. Next, a novel dual-graph-based structure using the spatiotemporal background priors is put forward for the computation of saliency maps, taking full advantage of both appearance and motion information in videos. Experimental results on several challenging datasets show that the proposed method robustly and accurately detects video objects in both simple and complex scenes and achieves better performance than other state-of-the-art video saliency models.
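The core idea of combining spatial (boundary) and temporal background priors into a saliency map can be illustrated with a minimal sketch. This is not the paper's method: the motion-threshold temporal prior below is a simplified stand-in for the SIFT-flow integration and bidirectional consistency propagation, the dual-graph structure is omitted, and all function names and parameters are illustrative.

```python
import numpy as np

def spatial_background_prior(h, w, border=2):
    """Boundary prior: pixels near the image border are assumed background."""
    mask = np.zeros((h, w), dtype=bool)
    mask[:border, :] = mask[-border:, :] = True
    mask[:, :border] = mask[:, -border:] = True
    return mask

def temporal_background_prior(flow_mag, thresh=0.5):
    """Toy temporal prior: low-motion pixels are assumed background.
    (Stand-in for the paper's SIFT-flow consistency propagation.)"""
    return flow_mag < thresh

def saliency_from_priors(frame, bg_mask):
    """Saliency as color distance to the mean background color, scaled to [0, 1]."""
    bg_mean = frame[bg_mask].mean(axis=0)
    sal = np.linalg.norm(frame - bg_mean, axis=-1)
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

# Toy frame: a bright moving square on a dark static background.
h, w = 32, 32
frame = np.zeros((h, w, 3))
frame[12:20, 12:20] = 1.0          # the object
flow = np.zeros((h, w))
flow[12:20, 12:20] = 2.0           # only the object moves

# Spatiotemporal background prior: union of spatial and temporal priors.
bg = spatial_background_prior(h, w) | temporal_background_prior(flow)
sal = saliency_from_priors(frame, bg)
```

In this toy example the moving square is excluded from the background set by both priors, so its color contrast against the background mean yields a high saliency value, while the static dark region stays near zero.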
Tao Xi, Yuming Fang, Weisi Lin, Yabin Zhang