Explicit Visual Prompts for Visual Object Tracking

Authors

  • Liangtao Shi — Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University; Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University
  • Bineng Zhong — Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University; Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University
  • Qihua Liang — Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University; Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University
  • Ning Li — Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University; Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University
  • Shengping Zhang — Harbin Institute of Technology
  • Xianxian Li — Key Laboratory of Education Blockchain and Intelligent Technology, Ministry of Education, Guangxi Normal University; Guangxi Key Lab of Multi-Source Information Mining & Security, Guangxi Normal University

DOI:

https://doi.org/10.1609/aaai.v38i5.28286

Keywords:

CV: Motion & Tracking

Abstract

How to effectively exploit spatio-temporal information is crucial for capturing target appearance changes in visual tracking. However, most deep learning-based trackers mainly focus on designing a complicated appearance model or template-updating strategy, while lacking the exploitation of context between consecutive frames and thus facing the when-and-how-to-update dilemma. To address these issues, we propose a novel explicit visual prompts framework for visual tracking, dubbed EVPTrack. Specifically, we utilize spatio-temporal tokens to propagate information between consecutive frames without focusing on updating templates. As a result, we can not only alleviate the challenge of when-to-update, but also avoid the hyper-parameters associated with updating strategies. We then utilize the spatio-temporal tokens to generate explicit visual prompts that facilitate inference in the current frame. The prompts are fed into a transformer encoder together with the image tokens without additional processing. Consequently, the efficiency of our model is improved by sidestepping the how-to-update problem. In addition, we consider multi-scale information as explicit visual prompts, providing multi-scale template features to enhance EVPTrack's ability to handle target scale changes. Extensive experimental results on six benchmarks (i.e., LaSOT, LaSOText, GOT-10k, UAV123, TrackingNet, and TNL2K) validate that our EVPTrack can achieve competitive performance at real-time speed by effectively exploiting both spatio-temporal and multi-scale information. Code and models are available at https://github.com/GXNU-ZhongLab/EVPTrack.
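The core idea described above — feeding prompt tokens into the encoder alongside the image tokens, with no separate update module — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the token counts, dimensions, and the single-head attention layer (with identity projections, standing in for one transformer encoder layer) are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # Single-head scaled dot-product self-attention with identity
    # Q/K/V projections -- a stand-in for one encoder layer.
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)
    return softmax(scores) @ tokens

rng = np.random.default_rng(0)
d = 16                                         # illustrative embedding size
image_tokens  = rng.standard_normal((64, d))   # search-region patch tokens
prompt_tokens = rng.standard_normal((4, d))    # explicit visual prompt tokens

# Prompts are simply concatenated with the image tokens and encoded
# jointly; attention lets every image token attend to the prompts.
encoded = self_attention(np.concatenate([prompt_tokens, image_tokens], axis=0))
print(encoded.shape)  # (68, 16)
```

Because the prompts travel through the same encoder as the image tokens, no hand-tuned template-update schedule or fusion module is required, which is the efficiency point the abstract makes.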

Published

2024-03-24

How to Cite

Shi, L., Zhong, B., Liang, Q., Li, N., Zhang, S., & Li, X. (2024). Explicit Visual Prompts for Visual Object Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 38(5), 4838–4846. https://doi.org/10.1609/aaai.v38i5.28286

Section

AAAI Technical Track on Computer Vision IV