JOURNAL ARTICLE

Multi-Level Model for Video Saliency Detection

Abstract

This paper proposes a fast detection model for salient objects in video based on a recurrent network architecture. First, a multi-level attention (MLA) module is designed that integrates multi-level feature maps in a cascaded manner, effectively extracting both the semantic information and the detailed information within each frame. These spatial features are fed into a deeper bidirectional ConvLSTM to learn temporal dependence. Second, the output of the forward pass is used as the backward input, so that deeper temporal dependence is extracted. Finally, we present a spatial-temporal fused bidirectional ConvLSTM framework that reduces the memory accumulated in the bidirectional ConvLSTM by exploiting an element-level fusion strategy. Experimental results show that the proposed method achieves the best detection precision on two challenging benchmarks, the ViSal and FBMS datasets, at a real-time speed of 23 fps.
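The two fusion ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the sigmoid attention gate, the shared spatial size across levels, and the per-frame element-wise sum standing in for the ConvLSTM's element-level fusion are all simplifying assumptions made for illustration.

```python
import numpy as np

def mla_fuse(feature_maps):
    """Cascaded multi-level fusion (sketch): start from the deepest,
    most semantic level and fold in each shallower, more detailed map,
    gating it with a sigmoid attention map derived from the running
    fusion. All maps are assumed to share one spatial size here."""
    fused = feature_maps[-1]
    for feat in reversed(feature_maps[:-1]):
        attn = 1.0 / (1.0 + np.exp(-fused))   # sigmoid attention from deeper levels
        fused = feat * attn + fused           # gate detail by semantics, then add
    return fused

def bidirectional_fuse(forward_states, backward_states):
    """Element-level fusion of forward and backward temporal features:
    a per-frame element-wise sum instead of channel concatenation,
    which is one way to reduce the memory the bidirectional
    recurrence has to accumulate."""
    return [f + b for f, b in zip(forward_states, backward_states)]

# Toy run: 3 feature levels on 8x8 spatial maps, then 4 "frames".
rng = np.random.default_rng(0)
levels = [rng.standard_normal((8, 8)) for _ in range(3)]
spatial = mla_fuse(levels)

fwd = [spatial + t for t in range(4)]   # stand-ins for forward ConvLSTM states
bwd = list(reversed(fwd))               # stand-ins for backward ConvLSTM states
fused = bidirectional_fuse(fwd, bwd)
print(fused[0].shape)                   # prints (8, 8)
```

The cascade runs deep-to-shallow so that semantic context from deep layers decides which fine-grained details from shallow layers survive, matching the abstract's description of combining semantic and detailed information.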

Keywords:
Computer science, Frame (networking), Artificial intelligence, Salient, Feature (linguistics), Pattern recognition (psychology), Computer vision, Feature extraction

Metrics

Cited By: 8
FWCI (Field Weighted Citation Impact): 0.53
Refs: 32
Citation Normalized Percentile: 0.69

Topics

Visual Attention and Saliency Detection
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Image and Video Quality Assessment
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition
Advanced Image and Video Retrieval Techniques
Physical Sciences →  Computer Science →  Computer Vision and Pattern Recognition

Related Documents

JOURNAL ARTICLE

Video based Saliency Detection

Alok Thakur, Niraj Tiwari

Journal: International Journal of Computer Applications, Year: 2014, Vol: 92 (14), Pages: 8-12
BOOK-CHAPTER

Predictive Video Saliency Detection

Qian Li, Shifeng Chen, Beiwei Zhang

Communications in Computer and Information Science, Year: 2012, Pages: 178-185