Learning Attribute-Specific Representations for Visual Tracking


  • Yuankai Qi Harbin Institute of Technology
  • Shengping Zhang Harbin Institute of Technology
  • Weigang Zhang Harbin Institute of Technology, Weihai
  • Li Su University of Chinese Academy of Sciences
  • Qingming Huang University of Chinese Academy of Sciences
  • Ming-Hsuan Yang University of California, Merced




In recent years, convolutional neural networks (CNNs) have achieved great success in visual tracking. Most existing methods train or fine-tune a binary classifier to distinguish the target from its background. However, they may suffer from performance degradation due to insufficient training data. In this paper, we show that attribute information (e.g., illumination changes, occlusion, and motion) in the context facilitates training an effective classifier for visual tracking. In particular, we design an attribute-based CNN with multiple branches, where each branch is responsible for classifying the target under a specific attribute. Such a design reduces the appearance diversity of the target under each attribute and thus requires less data to train the model. We combine all attribute-specific features via ensemble layers to obtain more discriminative representations for the final target/background classification. The proposed method achieves favorable performance on the OTB100 dataset compared to state-of-the-art tracking methods. After being trained on the VOT datasets, the proposed network also shows good generalization ability on the UAV-Traffic dataset, whose attributes and target appearances differ significantly from those in the VOT datasets.
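The architecture described above can be illustrated with a minimal PyTorch sketch. Note that this is an assumption-laden toy model, not the authors' implementation: the attribute list, layer sizes, and the use of simple feature concatenation as the "ensemble layer" are illustrative choices only.

```python
# Hypothetical sketch of an attribute-based multi-branch CNN.
# All names, branch counts, and channel sizes below are illustrative
# assumptions, not taken from the paper's actual network.
import torch
import torch.nn as nn

ATTRIBUTES = ["illumination", "occlusion", "motion"]  # example attributes

class AttributeBranch(nn.Module):
    """One branch learns features for the target under a single attribute."""
    def __init__(self, in_ch=3, feat_ch=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # global pooling to a feature vector
        )

    def forward(self, x):
        return self.features(x).flatten(1)  # (batch, feat_ch)

class AttributeNet(nn.Module):
    """Combines attribute-specific features for target/background
    classification (here via concatenation plus a linear ensemble layer)."""
    def __init__(self, feat_ch=32):
        super().__init__()
        self.branches = nn.ModuleList(
            AttributeBranch(feat_ch=feat_ch) for _ in ATTRIBUTES
        )
        self.ensemble = nn.Linear(feat_ch * len(ATTRIBUTES), 2)  # 2 classes

    def forward(self, x):
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.ensemble(feats)

net = AttributeNet()
logits = net(torch.randn(4, 3, 64, 64))  # batch of 4 image patches
print(logits.shape)  # torch.Size([4, 2])
```

In this sketch, each branch would be trained on samples carrying its attribute, so each sees a lower-diversity subset of target appearances, which is the data-efficiency argument made in the abstract.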




How to Cite

Qi, Y., Zhang, S., Zhang, W., Su, L., Huang, Q., & Yang, M.-H. (2019). Learning Attribute-Specific Representations for Visual Tracking. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8835-8842. https://doi.org/10.1609/aaai.v33i01.33018835



AAAI Technical Track: Vision