Target Focused Shallow Transformer Framework for Efficient Visual Tracking

Authors

  • Md Maklachur Rahman Texas A&M University, College Station, TX, USA

DOI:

https://doi.org/10.1609/aaai.v38i21.30405

Keywords:

Deep Learning, Visual Object Tracking, Single Object Tracking, Object Tracking, Computer Vision

Abstract

Template-learning transformer trackers have achieved significant performance improvements recently due to long-range dependency learning with the self-attention (SA) mechanism. However, the typical SA mechanism in transformers adopts a design that is insufficiently discriminative to focus on the most important target information during tracking. As a result, existing trackers are easily distracted by background information and struggle with common tracking challenges. The focus of our research is to develop a target-focused, discriminative, shallow transformer tracking framework that learns to distinguish the target from the background and enables accurate tracking at high speed. Extensive experiments will be performed on several popular benchmarks, including OTB100, UAV123, GOT10k, LaSOT, and TrackingNet, to demonstrate the effectiveness of the proposed framework.
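
The abstract refers to attention between template and search-region features in transformer trackers. As a rough illustration only, and not the paper's actual architecture, the following minimal PyTorch sketch shows a single shallow attention block in which search-region tokens attend to template tokens, so the target template guides the search-region representation; all module names, dimensions, and the cross-attention formulation are assumptions made here for illustration.

```python
# Illustrative sketch only -- NOT the paper's architecture.
# Assumes PyTorch; dimensions and module structure are hypothetical.
import torch
import torch.nn as nn


class ShallowTemplateAttention(nn.Module):
    """One attention block where search-region tokens attend to template
    tokens, so target (template) information guides the search features."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, search_tokens: torch.Tensor,
                template_tokens: torch.Tensor) -> torch.Tensor:
        # Queries come from the search region, keys/values from the template,
        # so the attention weights measure similarity to the target template.
        attended, _ = self.attn(query=search_tokens,
                                key=template_tokens,
                                value=template_tokens)
        return self.norm(search_tokens + attended)


if __name__ == "__main__":
    # Toy shapes: batch of 1, 64 template tokens, 256 search tokens, dim 256.
    template = torch.randn(1, 64, 256)
    search = torch.randn(1, 256, 256)
    out = ShallowTemplateAttention()(search, template)
    print(out.shape)  # torch.Size([1, 256, 256])
```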

Published

2024-03-24

How to Cite

Rahman, M. M. (2024). Target Focused Shallow Transformer Framework for Efficient Visual Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23409-23410. https://doi.org/10.1609/aaai.v38i21.30405