Recently, research in the field of visual saliency estimation has grown in both neuroscience and computer vision. In this context, we present a new video saliency approach based on deep saliency features that highlights the most important objects in videos. Our approach investigates the use of spatio-temporal object candidates for saliency estimation in video data. We extract features using a deep convolutional neural network (CNN), an architecture that has recently achieved great success in visual recognition. For each video, we extract deep features from the object candidates, train a Random Forest, and assign a saliency score to each candidate. Overall, our deep saliency approach demonstrates strong performance when evaluated on two data sets, Fukuchi and SegTrack v2, in terms of ROC curves, PR curves, and F-score.
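The pipeline described above (CNN features per object candidate, a Random Forest mapping features to a saliency score) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the CNN features are replaced by a random placeholder matrix, and the labels, dimensions, and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Placeholder for deep CNN features: one row per spatio-temporal
# object candidate (feat_dim is a hypothetical feature size).
n_candidates, feat_dim = 200, 64
X = rng.normal(size=(n_candidates, feat_dim))

# Placeholder ground-truth saliency labels in [0, 1]
# (in the paper these would come from annotated salient objects).
y = (X[:, 0] > 0).astype(float)

# Train a Random Forest to regress a saliency score per candidate.
model = RandomForestRegressor(n_estimators=50, random_state=0)
model.fit(X, y)

# Assign a saliency score to each object candidate.
scores = model.predict(X)
print(scores.shape)  # one score per candidate
```

At test time, each candidate's predicted score would be composited back into the frame to form the final saliency map, with higher scores marking the most important objects.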