Correlation filters and deep learning are the two main directions in visual tracking research. However, existing trackers do not balance accuracy and speed well. The introduction of Siamese networks has brought great improvements in both accuracy and speed, and an increasing number of researchers are paying attention to this approach. In this paper, we propose a robust adaptive learning visual tracking algorithm based on the Siamese network model. HOG features, CN features, and deep convolutional features are extracted from the template frame and the search-region frame respectively; we analyze the merits of each feature and perform adaptive feature fusion to improve the validity of the feature representation. We then update the two branch models with two learning change factors to achieve a more accurate match for locating the target. In addition, we propose a model update strategy that employs the average peak-to-correlation energy (APCE) to determine whether to update the learning change factors, improving the accuracy of the tracking model and reducing tracking drift in cases of tracking failure, deformation, background blur, etc. Extensive experiments on the benchmark datasets OTB-50 and OTB-100 demonstrate that our algorithm outperforms several state-of-the-art trackers in accuracy and robustness.
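The APCE criterion mentioned above measures the sharpness of a correlation response map. A minimal sketch of one common definition (peak-to-minimum gap squared over the mean squared deviation from the minimum; the function name and thresholding use here are illustrative, not the paper's exact implementation):

```python
import numpy as np

def apce(response):
    """Average peak-to-correlation energy (APCE) of a response map.

    APCE = |F_max - F_min|^2 / mean((F_wh - F_min)^2)

    A high APCE indicates a single sharp peak (a confident detection);
    a sudden drop suggests occlusion, deformation, or tracking failure,
    in which case the model update can be skipped.
    """
    f_max = response.max()
    f_min = response.min()
    return (f_max - f_min) ** 2 / np.mean((response - f_min) ** 2)

# Illustrative usage: a sharp single peak scores higher than a diffuse map.
sharp = np.zeros((5, 5))
sharp[2, 2] = 1.0                      # ideal, confident response
diffuse = np.linspace(0, 1, 25).reshape(5, 5)  # spread-out response
```
In an update strategy of this kind, the tracker would typically compare the current frame's APCE against a running average and update the learning change factors only when confidence is high.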
Wancheng Zhang, Yongzhao Du, Zhi Chen, Jianhua Deng, Peizhong Liu