Model Uncertainty Guides Visual Object Tracking
DOI: https://doi.org/10.1609/aaai.v35i4.16473
Keywords: Motion & Tracking
Abstract
Modern object trackers largely rely on the online learning of a discriminative classifier from potentially diverse sample frames. However, noisy or insufficient samples can deteriorate the classifier's performance and cause tracking drift. Furthermore, challenging conditions such as occlusion and blurring can cause the target to be lost. In this paper, we make several improvements aimed at tackling uncertainty and improving robustness in object tracking. Our first and most important contribution is a sampling method for the online learning of object trackers based on uncertainty adjustment: our method effectively selects representative sample frames to feed the discriminative branch of the tracker, while filtering out noisy samples. Furthermore, to improve the robustness of the tracker in various challenging scenarios, we propose a novel data augmentation procedure together with an improved backbone architecture. All of our improvements fit together in one model, which we refer to as the Uncertainty Adjusted Tracker (UATracker), and can be trained in a joint, end-to-end fashion. Experiments on the LaSOT, UAV123, OTB100 and VOT2018 benchmarks demonstrate that UATracker outperforms state-of-the-art real-time trackers by significant margins.
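To make the sampling idea concrete, below is a minimal sketch of uncertainty-guided sample selection for an online update. It assumes per-frame uncertainty is estimated as the variance of several stochastic (e.g., Monte Carlo dropout) forward passes; the abstract does not specify the paper's estimator, and the helpers `predictive_uncertainty` and `select_training_samples` are hypothetical illustrations, not the authors' implementation.

```python
import numpy as np

def predictive_uncertainty(mc_scores: np.ndarray) -> np.ndarray:
    """Per-frame uncertainty as the variance across Monte Carlo
    forward passes. One common proxy; the paper's exact estimator
    is not given on this page."""
    # mc_scores: (num_mc_passes, num_frames) classifier confidences
    return mc_scores.var(axis=0)

def select_training_samples(frames, uncertainties, keep_ratio=0.7):
    """Keep the lowest-uncertainty frames for the online update of
    the discriminative branch, filtering out the noisiest candidates.
    Hypothetical helper; keep_ratio is an assumed hyperparameter."""
    order = np.argsort(uncertainties)              # low uncertainty first
    n_keep = max(1, int(keep_ratio * len(frames)))
    return [frames[i] for i in order[:n_keep]]

# Toy usage: 5 stochastic passes over 8 candidate sample frames.
rng = np.random.default_rng(0)
frames = [f"frame_{i}" for i in range(8)]
mc_scores = rng.normal(size=(5, 8))
u = predictive_uncertainty(mc_scores)
print(select_training_samples(frames, u))
```

The design choice here, ranking candidate frames by uncertainty and keeping only the most reliable fraction, mirrors the abstract's goal of feeding representative frames to the discriminative branch while filtering out noisy ones.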
Published
2021-05-18
How to Cite
Zhou, L., Ledent, A., Hu, Q., Liu, T., Zhang, J., & Kloft, M. (2021). Model Uncertainty Guides Visual Object Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3581-3589. https://doi.org/10.1609/aaai.v35i4.16473
Issue
Vol. 35 No. 4 (2021)
Section
AAAI Technical Track on Computer Vision III