JOURNAL ARTICLE

Online visual object tracking with supervised sparse representation and learning

Abstract

In this paper, an online visual object tracking algorithm based on a discriminative sparse representation framework with supervised learning is proposed. Unlike generative sparse-representation-based tracking algorithms, the proposed method casts tracking as a binary classification task. A linear classifier is embedded into the sparse representation model by incorporating the classification error into the objective function, yielding discriminative classification. The dictionary and the classifier are jointly trained with an online dictionary learning algorithm, which allows the model to adapt to dynamic variations in target appearance and background environment. The target location is updated based on the classification score and a greedy-search motion model. The proposed method is evaluated on four benchmark datasets and compared with three state-of-the-art tracking algorithms; the results show that the discriminative sparse representation improves tracking performance.
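The abstract describes embedding a linear classifier into a sparse representation model by adding a classification-error term to the objective. A minimal sketch of such a joint objective is shown below, assuming a squared classification loss on the classifier score and ISTA for the sparse coding step; the function names, the choice of loss, and the weights `lam` and `mu` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_code(x, D, lam, n_iter=200):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)          # gradient of the reconstruction term
        a = soft_threshold(a - grad / L, lam / L)
    return a

def supervised_objective(x, y, D, w, lam, mu):
    """Joint objective: reconstruction + sparsity + classification error.

    x: feature vector of a candidate patch; y in {-1, +1} its label
    (target vs. background); D: dictionary; w: linear classifier.
    A squared loss (y - w^T a)^2 stands in for the classification-error
    term; the paper's actual loss may differ.
    """
    a = sparse_code(x, D, lam)
    recon = 0.5 * np.linalg.norm(x - D @ a) ** 2
    cls = 0.5 * mu * (y - w @ a) ** 2
    return recon + lam * np.abs(a).sum() + cls, a
```

In an online setting, both `D` and `w` would then be updated jointly from the sparse codes of newly labeled samples after each frame, which is what lets the model track changes in appearance and background.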

Keywords:
Visual object tracking; sparse representation; discriminative model; generative model; online dictionary learning; binary classification; support vector machine; pattern recognition; machine learning; computer vision

Metrics

Cited by: 1
FWCI (Field-Weighted Citation Impact): 0.24
References: 27
Citation Normalized Percentile: 0.58

Topics (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)

Video Surveillance and Tracking Methods
Face and Expression Recognition
Advanced Vision and Imaging