Discriminative and Robust Online Learning for Siamese Visual Tracking

Authors

  • Jinghao Zhou, Northwestern Polytechnical University
  • Peng Wang, Northwestern Polytechnical University
  • Haoyang Sun, Northwestern Polytechnical University

DOI:

https://doi.org/10.1609/aaai.v34i07.7002

Abstract

The problem of visual object tracking has traditionally been handled by two divergent paradigms: either learning a model of the object's appearance exclusively online, or matching the target against an offline-trained embedding space. Despite their recent success, each approach suffers from an intrinsic constraint. Online-only approaches lack generalization in the model they learn and are therefore inferior at target regression, while offline-only approaches (e.g., convolutional siamese trackers) lack target-specific context information and are therefore neither discriminative enough to handle distractors nor robust enough to handle deformation. We therefore propose an online module with an attention mechanism for offline siamese networks, which extracts target-specific features under an L2 error. We further propose a filter update strategy adaptive to treacherous background noise for discriminative learning, and a template update strategy that handles large target deformations for robust learning. The effectiveness of our approach is validated by consistent improvements over three siamese baselines: SiamFC, SiamRPN++, and SiamMask. Beyond that, our model based on SiamRPN++ obtains the best results on six popular tracking benchmarks while operating beyond real time.
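The central mechanism the abstract describes, fitting a target-specific module online under an L2 error on top of frozen offline features and updating it only when conditions are favorable, can be illustrated with a minimal sketch. Everything below (the function names, the 1x1 filter, the Gaussian label, and the confidence-gated template update) is an illustrative assumption written in PyTorch, not the paper's actual architecture.

```python
# Minimal sketch: learn a target-specific 1x1 filter online with an
# L2 (least-squares) objective on frozen backbone features, then fuse
# its response with the siamese similarity map at test time.
import torch
import torch.nn.functional as F

def learn_online_filter(feat, label, steps=30, lr=0.1, reg=1e-2):
    """Fit a target-specific filter to the initial frame's features.

    feat:  (1, C, H, W) frozen backbone features of the first frame.
    label: (1, 1, H, W) desired response, e.g. a Gaussian on the target.
    """
    c = feat.shape[1]
    w = torch.zeros(1, c, 1, 1, requires_grad=True)  # learned per sequence
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        resp = F.conv2d(feat, w)                     # target-specific response
        # L2 error plus a ridge term to keep the online filter stable.
        loss = F.mse_loss(resp, label) + reg * (w ** 2).sum()
        loss.backward()
        opt.step()
    return w.detach()

def update_template(z_old, z_new, score, thresh=0.9, alpha=0.1):
    """Hypothetical confidence-gated linear template update; the paper's
    actual update strategies are more involved than this moving average."""
    if score > thresh:
        return (1 - alpha) * z_old + alpha * z_new
    return z_old

# Toy usage: random "features" and a Gaussian label centered on the target.
feat = torch.randn(1, 64, 25, 25)
ys, xs = torch.meshgrid(torch.arange(25.), torch.arange(25.), indexing="ij")
label = torch.exp(-((ys - 12) ** 2 + (xs - 12) ** 2) / (2 * 2.0 ** 2))[None, None]
w = learn_online_filter(feat, label)
response = F.conv2d(feat, w)  # fused with the siamese similarity map downstream
```

The gated update mirrors the trade-off the abstract raises: updating freely adapts to deformation but risks drifting onto background noise, so the template only moves when tracking confidence is high.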

Published

2020-04-03

How to Cite

Zhou, J., Wang, P., & Sun, H. (2020). Discriminative and Robust Online Learning for Siamese Visual Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 13017-13024. https://doi.org/10.1609/aaai.v34i07.7002

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision