Bi-directional Adapter for Multimodal Tracking

Authors

  • Bing Cao, Tianjin University
  • Junliang Guo, Tianjin University
  • Pengfei Zhu, Tianjin University
  • Qinghua Hu, Tianjin University

DOI:

https://doi.org/10.1609/aaai.v38i2.27852

Keywords:

CV: Motion & Tracking, CV: Multi-modal Vision

Abstract

Due to the rapid development of computer vision, single-modal (RGB) object tracking has made significant progress in recent years. To compensate for the limitations of a single imaging sensor, multi-modal images (RGB, infrared, etc.) have been introduced to enable all-weather object tracking in complex environments. However, because sufficient multi-modal tracking data is hard to acquire and the dominant modality changes with the open environment, most existing techniques fail to extract multi-modal complementary information dynamically, yielding unsatisfactory tracking performance. To handle this problem, we propose a novel multi-modal visual prompt tracking model based on a universal bi-directional adapter that cross-prompts multiple modalities mutually. Our model consists of a universal bi-directional adapter and multiple modality-specific transformer encoder branches with shared parameters. The encoders extract the features of each modality separately using a frozen, pre-trained foundation model. We develop a simple but effective lightweight feature adapter that transfers modality-specific information from one modality to another, performing visual feature prompt fusion in an adaptive manner. By adding only 0.32M trainable parameters, our model achieves superior tracking performance compared with both full fine-tuning methods and prompt learning-based methods. Our code is available at https://github.com/SparkTempest/BAT.
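The bi-directional adapter described in the abstract lends itself to a short sketch. Below is a minimal PyTorch illustration of the idea, not the authors' released code (see the repository linked above for that): the class names, the bottleneck width, and the choice of one adapter per direction per layer are illustrative assumptions. Each modality branch runs a frozen, shared pre-trained encoder layer, while a zero-initialized bottleneck adapter carries the other branch's features across as a prompt.

```python
import torch
import torch.nn as nn


class LightAdapter(nn.Module):
    """Bottleneck adapter: down-project then up-project, zero-initialized
    so the cross-modal prompt starts as a no-op. Names and dims are
    illustrative, not taken from the BAT code."""

    def __init__(self, dim: int = 768, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


class CrossPromptBlock(nn.Module):
    """One dual-branch step: a frozen, parameter-shared encoder layer
    processes each modality, while two tiny adapters exchange feature
    prompts between the RGB and infrared (IR) branches."""

    def __init__(self, frozen_layer: nn.Module, dim: int = 768):
        super().__init__()
        self.layer = frozen_layer
        for p in self.layer.parameters():
            p.requires_grad = False  # only the adapters are trainable
        self.ir_to_rgb = LightAdapter(dim)
        self.rgb_to_ir = LightAdapter(dim)

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor):
        # Each branch receives the other branch's adapted features as a prompt.
        rgb_out = self.layer(rgb + self.ir_to_rgb(ir))
        ir_out = self.layer(ir + self.rgb_to_ir(rgb))
        return rgb_out, ir_out


if __name__ == "__main__":
    # Smoke test with an identity layer standing in for a frozen
    # pre-trained transformer block (e.g. one ViT block).
    block = CrossPromptBlock(nn.Identity(), dim=768)
    rgb = torch.randn(2, 196, 768)  # (batch, tokens, dim)
    ir = torch.randn(2, 196, 768)
    rgb_out, ir_out = block(rgb, ir)
    print(rgb_out.shape, ir_out.shape)
```

Zero-initializing the up-projection makes each prompt vanish at the start of training, so optimization begins from the frozen single-modal baseline; the small bottleneck dimension is what keeps the trainable-parameter budget tiny (the paper reports 0.32M in total).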

Published

2024-03-24

How to Cite

Cao, B., Guo, J., Zhu, P., & Hu, Q. (2024). Bi-directional Adapter for Multimodal Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 38(2), 927-935. https://doi.org/10.1609/aaai.v38i2.27852

Issue

Vol. 38 No. 2 (2024)

Section

AAAI Technical Track on Computer Vision I