Ya-Qin Zhang, Liejun Wang, Jiwei Qin
In recent years, the spatio-temporal context (STC) algorithm has attracted the attention of scholars because it makes full use of the information in the target's background. Although the STC algorithm achieves real-time tracking, its tracking capability still needs improvement when the target is occluded or changes in size. In this paper, we present an adaptive spatio-temporal context learning algorithm for visual tracking (AFSTC). First, to describe the target's appearance accurately, we fuse Histogram of Oriented Gradient (HOG) and Colour-naming (CN) features. Then, we use the average difference between two adjacent frames to adjust the learning rate of the model update, enabling adaptive tracking. Finally, we tune the parameters of the scale-update strategy to achieve competitive accuracy and robustness. We perform experiments on the Online Tracking Benchmark (OTB) 2015 dataset. Our tracker achieves a 13% relative gain in distance precision over the traditional STC algorithm. Moreover, although our tracker is slower, it still reaches 129.99 frames per second (FPS) and maintains real-time tracking.
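The adaptive model update described above, where the average difference between two adjacent frames modulates the learning rate, might be sketched as follows. The function name, parameter values, and the inverse damping form are illustrative assumptions, not the paper's actual formula:

```python
import numpy as np

def adaptive_learning_rate(prev_frame, curr_frame,
                           base_rate=0.075, sensitivity=0.01):
    """Modulate the model-update learning rate by the mean absolute
    difference between two adjacent grayscale frames.

    `base_rate` and `sensitivity` are illustrative values, not taken
    from the paper.
    """
    diff = np.mean(np.abs(curr_frame.astype(np.float64)
                          - prev_frame.astype(np.float64)))
    # A large frame-to-frame change (possible occlusion or abrupt
    # motion) damps the update so the appearance model is not
    # corrupted by unreliable observations.
    return base_rate / (1.0 + sensitivity * diff)
```

With identical frames the rate stays at `base_rate`; as the inter-frame difference grows, the update is progressively suppressed.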