Object tracking is becoming a key ingredient in the analysis of video imagery. For efficient and robust object tracking, visual priors learned from generic real-world images are transferred to represent the tracked objects. The real-world images are learned offline into an over-complete dictionary; the VOC2010 and Caltech101 data sets, which contain a large variety of objects, are used to learn this visual prior. For online visual tracking, the learned prior is transferred to the object representation using ℓ1-regularized sparse coding and multi-scale max pooling. With this representation, the tracking task is formulated within a Bayesian inference framework using sparse prototypes. To reduce tracking drift, we present a model-update method that takes occlusion and motion blur into account rather than simply incorporating every image observation.
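The representation step described above (sparse coding over an over-complete dictionary, followed by multi-scale max pooling) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the dictionary here is random rather than learned from VOC2010/Caltech101, and the ISTA solver, regularization weight, and pooling grid sizes are all assumptions made for the example.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.1, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a||_1 with ISTA
    (iterative shrinkage-thresholding); a stand-in for any
    l1-regularized sparse coder."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

def multiscale_max_pool(codes, grid_sizes=(1, 2, 4)):
    """codes: (H, W, K) sparse codes on a spatial grid of patches.
    For each scale s, split the grid into s x s cells, take the
    element-wise max of the codes in each cell, and concatenate."""
    H, W, K = codes.shape
    feats = []
    for s in grid_sizes:
        hs, ws = H // s, W // s
        for i in range(s):
            for j in range(s):
                cell = codes[i*hs:(i+1)*hs, j*ws:(j+1)*ws, :]
                feats.append(cell.reshape(-1, K).max(axis=0))
    return np.concatenate(feats)

# Toy data: a random over-complete dictionary (64-dim patches, 128 atoms)
# and an 8x8 grid of patch descriptors standing in for an object window.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
patches = rng.standard_normal((8, 8, 64))
codes = np.stack([[ista_sparse_code(D, patches[i, j]) for j in range(8)]
                  for i in range(8)])      # (8, 8, 128) sparse codes
feat = multiscale_max_pool(codes)          # (1 + 4 + 16) * 128 = 2688-dim
```

The pooled vector `feat` is the kind of fixed-length, partially translation-invariant descriptor that a tracker can then score inside a Bayesian (particle-filter) framework.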
Dong Wang, Huchuan Lu, Ming-Hsuan Yang
Jia Yan, Xi Chen, Dexiang Deng, Qiuping Zhu
Tianxiang Bai, Youfu Li, Zhanpeng Shao
Qing Wang, Chen Feng, Shuicheng Yan, Wenli Xu, Ming-Hsuan Yang
Gang-Joon Yoon, Hyeong Jae Hwang, Sang Min Yoon