Exploring Modality-Aware Fusion and Decoupled Temporal Propagation for Multi-Modal Object Tracking

Authors

  • Shilei Wang, School of Automation, Northwestern Polytechnical University
  • Pujian Lai, School of Automation, Northwestern Polytechnical University
  • Dong Gao, School of Automation, Northwestern Polytechnical University
  • Jifeng Ning, College of Information Engineering, Northwest A&F University
  • Gong Cheng, School of Automation, Northwestern Polytechnical University

DOI

https://doi.org/10.1609/aaai.v40i12.37973

Abstract

Most existing multi-modal trackers adopt uniform fusion strategies, overlooking the inherent differences between modalities. Moreover, they propagate temporal information through mixed tokens, leading to entangled and less discriminative temporal representations. To address these limitations, we propose MDTrack, a novel framework for modality-aware fusion and decoupled temporal propagation in multi-modal object tracking. Specifically, for modality-aware fusion, we allocate dedicated experts to each modality (Infrared, Event, Depth, and RGB) to process their respective representations. The gating mechanism within the Mixture of Experts (MoE) then dynamically selects the optimal experts based on the input features, enabling adaptive, modality-specific fusion. For decoupled temporal propagation, we introduce two separate State Space Model (SSM) structures that independently store and update the hidden states h of the RGB and X-modal streams, effectively capturing their distinct temporal information. To ensure synergy between the two temporal representations, we incorporate a set of cross-attention layers between the input features of the two SSMs, facilitating implicit information exchange. The resulting temporally enriched features are then integrated into the backbone via another set of cross-attention layers, enhancing MDTrack’s ability to exploit temporal information. Extensive experiments demonstrate the effectiveness of the proposed method: both MDTrack-S (Modality-Specific Training) and MDTrack-U (Unified-Modality Training) achieve state-of-the-art performance across five multi-modal tracking benchmarks.
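As a concrete reading of the modality-aware fusion described in the abstract, the following is a minimal PyTorch sketch of a Mixture-of-Experts layer with one dedicated expert per modality and a gating network that weights the experts from the input features. All class, parameter, and modality-key names are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityAwareMoE(nn.Module):
    """Sketch of modality-aware fusion via a Mixture of Experts.

    One expert (a small MLP) is allocated per modality; a gating network
    scores the experts from the input tokens and mixes their outputs.
    Names and shapes here are assumptions for illustration only.
    """

    def __init__(self, dim: int, modalities=("rgb", "ir", "event", "depth")):
        super().__init__()
        self.modalities = modalities
        self.experts = nn.ModuleDict({
            m: nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for m in modalities
        })
        self.gate = nn.Linear(dim, len(modalities))  # one score per expert

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, N, dim) fused RGB/X tokens from the backbone
        weights = F.softmax(self.gate(tokens), dim=-1)  # (B, N, E)
        expert_out = torch.stack(
            [self.experts[m](tokens) for m in self.modalities], dim=-1
        )  # (B, N, dim, E)
        # gating dynamically re-weights the modality-specific experts per token
        return (expert_out * weights.unsqueeze(2)).sum(dim=-1)
```

Because the gate is conditioned on the input features themselves, each token can be routed predominantly to the expert matching its dominant modality, which is one plausible way to realize the adaptive, modality-specific fusion the abstract describes.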
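Likewise, here is a minimal sketch of the decoupled temporal propagation: two independent state-space recurrences keep separate hidden states for the RGB and X-modal streams, with a pair of cross-attentions exchanging information between their input features before each update. A plain linear recurrence h_t = A h_{t-1} + B(x_t), y_t = C(h_t) stands in for the paper's SSM blocks; the diagonal decay parameterization and all names are assumptions.

```python
import torch
import torch.nn as nn

class DecoupledTemporalSSM(nn.Module):
    """Sketch of decoupled temporal propagation with two SSM streams.

    Separate hidden states are kept for the RGB and X-modal streams;
    cross-attentions exchange information between the two SSM inputs.
    """

    def __init__(self, dim: int, state_dim: int, heads: int = 4):
        super().__init__()
        self.cross_rgb = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_x = nn.MultiheadAttention(dim, heads, batch_first=True)
        # independent (A, B, C) per stream so the hidden states stay decoupled
        self.A = nn.ParameterDict(
            {s: nn.Parameter(torch.zeros(state_dim)) for s in ("rgb", "x")}
        )
        self.B = nn.ModuleDict({s: nn.Linear(dim, state_dim) for s in ("rgb", "x")})
        self.C = nn.ModuleDict({s: nn.Linear(state_dim, dim) for s in ("rgb", "x")})

    def step(self, f_rgb, f_x, h_rgb, h_x):
        # f_rgb, f_x: (B, N, dim) per-frame features; h_*: (B, N, state_dim)
        # implicit information exchange between the two SSM inputs
        f_rgb = f_rgb + self.cross_rgb(f_rgb, f_x, f_x)[0]
        f_x = f_x + self.cross_x(f_x, f_rgb, f_rgb)[0]
        # independent state updates: h_t = A * h_{t-1} + B(x_t),
        # with sigmoid(A) in (0, 1) acting as a stable per-channel decay
        h_rgb = torch.sigmoid(self.A["rgb"]) * h_rgb + self.B["rgb"](f_rgb)
        h_x = torch.sigmoid(self.A["x"]) * h_x + self.B["x"](f_x)
        # temporally enriched outputs, later fused back into the backbone
        return self.C["rgb"](h_rgb), self.C["x"](h_x), h_rgb, h_x
```

Keeping two recurrences rather than one over mixed tokens means each stream's hidden state summarizes only its own modality's history, which is the disentanglement the abstract argues is lost when temporal information is propagated through mixed tokens.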

Published

2026-03-14

How to Cite

Wang, S., Lai, P., Gao, D., Ning, J., & Cheng, G. (2026). Exploring Modality-Aware Fusion and Decoupled Temporal Propagation for Multi-Modal Object Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 40(12), 10065–10073. https://doi.org/10.1609/aaai.v40i12.37973

Section

AAAI Technical Track on Computer Vision IX