Generalizing Multiple Object Tracking to Unseen Domains by Introducing Natural Language Representation
DOI:
https://doi.org/10.1609/aaai.v37i3.25437
Keywords:
CV: Motion & Tracking, CV: Language and Vision, CV: Multi-modal Vision
Abstract
Although existing multi-object tracking (MOT) algorithms have obtained competitive performance on various benchmarks, almost all of them train and validate models on the same domain. The domain generalization problem of MOT has hardly been studied. To bridge this gap, we first observe that the high-level information contained in natural language is invariant across different tracking domains. Based on this observation, we propose to introduce natural language representation into visual MOT models to boost their domain generalization ability. However, it is infeasible to label every tracking target with a textual description. To tackle this problem, we design two modules, namely visual context prompting (VCP) and visual-language mixing (VLM). Specifically, VCP generates visual prompts based on the input frames. VLM fuses the information in the generated visual prompts with textual prompts from a pre-defined Trackbook to obtain instance-level pseudo textual descriptions, which are domain invariant across different tracking scenes. By training models on MOT17 and validating them on MOT20, we observe that the pseudo textual descriptions generated by our proposed modules improve the generalization performance of query-based trackers by large margins.
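To make the VCP/VLM pipeline described in the abstract concrete, below is a minimal, illustrative sketch of how visual prompts might be generated from instance features and mixed with Trackbook text embeddings to produce instance-level pseudo textual descriptions. The module interfaces, dimensions, projection layers, cross-attention mixing, and the query-fusion step are assumptions made for this example and are not taken from the authors' implementation.

```python
# Illustrative sketch only: interfaces, dimensions, and the Trackbook entries
# below are assumptions for this example, not the paper's actual code.
import torch
import torch.nn as nn


class VisualContextPrompting(nn.Module):
    """Generates visual prompts from per-instance frame features (assumed interface)."""
    def __init__(self, dim=256):
        super().__init__()
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, instance_feats):           # (num_instances, dim)
        return self.proj(instance_feats)          # visual prompts, (num_instances, dim)


class VisualLanguageMixing(nn.Module):
    """Mixes visual prompts with Trackbook text prompts via cross-attention (assumed design)."""
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, visual_prompts, text_prompts):
        # visual_prompts: (num_instances, dim); text_prompts: (num_texts, dim)
        q = visual_prompts.unsqueeze(0)           # (1, num_instances, dim)
        kv = text_prompts.unsqueeze(0)            # (1, num_texts, dim)
        pseudo_desc, _ = self.attn(q, kv, kv)     # instance-level pseudo textual descriptions
        return pseudo_desc.squeeze(0)             # (num_instances, dim)


if __name__ == "__main__":
    dim = 256
    # Hypothetical Trackbook: pre-encoded textual prompt embeddings (e.g., from a text encoder).
    trackbook_embeddings = torch.randn(16, dim)
    instance_feats = torch.randn(10, dim)         # features of 10 detected/tracked instances

    vcp = VisualContextPrompting(dim)
    vlm = VisualLanguageMixing(dim)

    visual_prompts = vcp(instance_feats)
    pseudo_text = vlm(visual_prompts, trackbook_embeddings)

    # One plausible way to use the pseudo descriptions: fuse them with the
    # tracker's object queries (here by simple addition) before decoding.
    object_queries = instance_feats + pseudo_text
    print(object_queries.shape)                   # torch.Size([10, 256])
```

The key idea the sketch tries to capture is that the textual side (the Trackbook embeddings) is fixed across domains, so conditioning the tracker's queries on descriptions derived from it injects domain-invariant information; how the fusion is actually performed in the paper may differ from the additive scheme shown here.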
Published
2023-06-26
How to Cite
Yu, E., Liu, S., Li, Z., Yang, J., Li, Z., Han, S., & Tao, W. (2023). Generalizing Multiple Object Tracking to Unseen Domains by Introducing Natural Language Representation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(3), 3304-3312. https://doi.org/10.1609/aaai.v37i3.25437
Issue
Section
AAAI Technical Track on Computer Vision III