Exploring Reliable Spatiotemporal Dependencies for Efficient Visual Tracking

Authors

  • Junze Shi — Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences; Shenyang Institute of Automation, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Yang Yu — Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences; Shenyang Institute of Automation, Chinese Academy of Sciences
  • Jian Shi — Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences; Shenyang Institute of Automation, Chinese Academy of Sciences; University of the Chinese Academy of Sciences
  • Haibo Luo — Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences; Shenyang Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i11.37853

Abstract

Recent advances in transformer-based lightweight object tracking have established new standards across benchmarks, leveraging the global receptive field and powerful feature extraction capabilities of attention mechanisms. Despite these achievements, existing methods universally employ sparse sampling during training—utilizing only one template and one search image per sequence—which fails to comprehensively explore spatiotemporal information in videos. This limitation constrains performance and widens the gap between lightweight and high-performance trackers. To bridge this divide while maintaining real-time efficiency, we propose STDTrack, a framework that pioneers the integration of reliable spatiotemporal dependencies into lightweight trackers. Our approach implements dense video sampling to maximize spatiotemporal information utilization. We introduce a temporally propagating spatiotemporal token to guide per-frame feature extraction. To ensure comprehensive target state representation, we design the Multi-frame Information Fusion Module (MFIFM), which augments current dependencies with historical context. The MFIFM operates on features stored in our Spatiotemporal Token Maintainer (STM), where a quality-based update mechanism ensures information reliability. To handle scale variation among tracking targets, we develop a multi-scale prediction head that dynamically adapts to objects of different sizes. Extensive experiments demonstrate state-of-the-art results across six benchmarks. Notably, on GOT-10k, STDTrack rivals certain high-performance non-real-time trackers (e.g., MixFormer) while operating at 192 FPS (GPU) and 41 FPS (CPU).
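To make the abstract's description of the STM concrete, the following is a minimal sketch of a quality-gated token buffer combined with a simple quality-weighted fusion step. All names (`SpatiotemporalTokenMaintainer`, `quality_threshold`, the weighted-average fusion) are illustrative assumptions; the paper's actual MFIFM and update rule are learned modules and will differ in detail.

```python
# Hypothetical sketch: a quality-gated spatiotemporal token buffer.
# The class name, threshold, and weighted-average "fusion" are assumptions
# for illustration only; the paper's MFIFM is a learned fusion module.
from collections import deque


class SpatiotemporalTokenMaintainer:
    def __init__(self, capacity=5, quality_threshold=0.6):
        self.quality_threshold = quality_threshold
        # Fixed-size history of (token, quality) pairs; old entries fall off.
        self.buffer = deque(maxlen=capacity)

    def update(self, token, quality):
        """Store a frame's token only if its predicted quality is reliable."""
        if quality >= self.quality_threshold:
            self.buffer.append((token, quality))
            return True
        return False  # unreliable frame: history stays unchanged

    def fuse(self, current_token):
        """Quality-weighted average of stored tokens with the current one
        (a stand-in for the learned multi-frame fusion)."""
        if not self.buffer:
            return current_token
        total_w = 1.0  # current token gets weight 1
        fused = list(current_token)
        for tok, q in self.buffer:
            total_w += q
            fused = [f + q * t for f, t in zip(fused, tok)]
        return [f / total_w for f in fused]


# Usage: a low-quality (occluded/blurred) frame is rejected, so it cannot
# contaminate the history used for fusion.
stm = SpatiotemporalTokenMaintainer(capacity=2, quality_threshold=0.5)
stm.update([1.0, 1.0], 0.8)   # accepted
stm.update([9.0, 9.0], 0.2)   # rejected: below threshold
print(stm.fuse([2.0, 2.0]))
```

The gate is the key idea: fusion only ever sees frames whose predicted quality cleared the threshold, which is what "reliable" spatiotemporal dependencies refers to here.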

Published

2026-03-14

How to Cite

Shi, J., Yu, Y., Shi, J., & Luo, H. (2026). Exploring Reliable Spatiotemporal Dependencies for Efficient Visual Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 40(11), 8978–8987. https://doi.org/10.1609/aaai.v40i11.37853

Section

AAAI Technical Track on Computer Vision VIII