This paper proposes a robust visual tracking approach based on saliency selection. Salient patches and their spatial context inside the object region are exploited for object representation and appearance modeling. Tracking is then implemented by a hybrid stochastic and deterministic mechanism, which requires only a small number of samples for particle filtering and escapes the local minima that trap conventional deterministic tracking. As time progresses, the selected salient patches and their spatial context are updated online to adapt the appearance model to both object and environmental changes. We carry out experiments on several challenging sequences and compare our method with state-of-the-art algorithms to demonstrate its improved tracking performance.
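The hybrid stochastic/deterministic idea described above can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the Gaussian appearance likelihood, the motion noise, and the unit-step refinement neighborhood are all placeholder assumptions standing in for the saliency-based appearance model.

```python
import numpy as np

def likelihood(pos, target_pos):
    # Placeholder appearance score: a Gaussian in the distance between a
    # candidate position and the target. A real tracker would instead score
    # the candidate against the salient-patch appearance model.
    d = np.linalg.norm(pos - target_pos)
    return np.exp(-0.5 * (d / 5.0) ** 2)

def hybrid_track_step(particles, target_pos, rng, n_refine=5):
    """One hybrid update: stochastic particle filtering followed by a
    deterministic local refinement from the best particle."""
    # Stochastic part: weight a small particle set by appearance likelihood
    # and resample (multinomial resampling for brevity), then diffuse.
    w = np.array([likelihood(p, target_pos) for p in particles])
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx] + rng.normal(0.0, 1.0, particles.shape)
    # Deterministic part: greedy hill-climbing from the best particle over a
    # unit-step neighborhood (staying put is one of the candidates, so the
    # score never decreases).
    best = particles[np.argmax([likelihood(p, target_pos) for p in particles])]
    moves = np.array([[0, 0], [1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
    for _ in range(n_refine):
        cands = best + moves
        best = cands[np.argmax([likelihood(c, target_pos) for c in cands])]
    return particles, best

rng = np.random.default_rng(0)
target = np.array([20.0, 30.0])
particles = rng.normal([15.0, 25.0], 3.0, size=(30, 2))
particles, estimate = hybrid_track_step(particles, target, rng)
```

The stochastic pass keeps the sample count small because the deterministic refinement compensates for coarse particle coverage, while the random resampling in turn lets the tracker escape local minima that a purely greedy search would get stuck in.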