JOURNAL ARTICLE

Video Saliency Detection Using Spatiotemporal Cues

Yu Chen, Jing Xiao, Liuyi Hu, Dan Chen, Zhongyuan Wang, Dengshi Li

Year: 2018
Journal: IEICE Transactions on Information and Systems
Vol: E101.D (9)
Pages: 2201-2208
Publisher: Institute of Electronics, Information and Communication Engineers

Abstract

Saliency detection for videos has received great attention and has been extensively studied in recent years. However, varied visual scenes with complicated motions lead to noticeable background noise and non-uniform highlighting of foreground objects. In this paper, we propose a video saliency detection model using spatiotemporal cues. In the spatial domain, the location of the foreground region is utilized as a spatial cue to constrain the accumulation of contrast for background regions. In the temporal domain, the spatial distribution of motion-similar regions is adopted as a temporal cue to further suppress background noise. Moreover, a backward-matching-based temporal prediction method is developed to adjust the temporal saliency according to its corresponding prediction from the previous frame, thus enforcing consistency along the time axis. Performance evaluation on several popular benchmark datasets validates that our approach outperforms existing state-of-the-art methods.
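
The backward-matching idea in the abstract can be illustrated with a short sketch. The following is a minimal Python illustration, not the authors' implementation: it uses OpenCV's Farneback dense optical flow to match each pixel of the current frame back to the previous frame, samples the previous frame's saliency map at the matched locations, and blends that prediction with the current temporal saliency. The function name propagate_saliency, the blending weight alpha, and all flow parameters are illustrative assumptions.

```python
# Minimal sketch of backward-matching temporal propagation (assumed
# reading of the abstract, not the paper's actual method details).
import cv2
import numpy as np

def propagate_saliency(prev_gray, curr_gray, prev_saliency, curr_saliency,
                       alpha=0.5):
    """Blend the current temporal saliency with a prediction warped
    from the previous frame. Inputs are single-channel 8-bit frames
    and float saliency maps of the same size; alpha is an assumed
    blending weight."""
    # Backward flow: for each pixel in the current frame, estimate the
    # displacement to its matching location in the previous frame.
    flow = cv2.calcOpticalFlowFarneback(
        curr_gray, prev_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = curr_gray.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xs + flow[..., 0]).astype(np.float32)
    map_y = (ys + flow[..., 1]).astype(np.float32)
    # Sample the previous saliency map at the matched locations to get
    # a per-pixel prediction for the current frame.
    predicted = cv2.remap(prev_saliency.astype(np.float32),
                          map_x, map_y, cv2.INTER_LINEAR)
    # Adjust the current temporal saliency toward the prediction,
    # enforcing consistency along the time axis.
    return alpha * predicted + (1.0 - alpha) * curr_saliency
```

In practice such a blend damps frame-to-frame flicker in the temporal saliency at the cost of some lag for fast-moving objects, which is why the blending weight would need to be tuned or made adaptive.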

Keywords:
Computer science, Artificial intelligence, Computer vision, Pattern recognition, Saliency detection, Benchmark, Noise, Contrast, Matching, Frame, Image

Metrics

Cited By: 0
FWCI (Field-Weighted Citation Impact): 0.00
References: 21
Citation Normalized Percentile: 0.09

Topics

Visual Attention and Saliency Detection (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Image and Video Quality Assessment (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Olfactory and Sensory Function Studies (Life Sciences → Neuroscience → Sensory Systems)