Cross-Modal Attention Network for Temporal Inconsistent Audio-Visual Event Localization

Authors

  • Hanyu Xuan, Nanjing University of Science and Technology
  • Zhenyu Zhang, Nanjing University of Science and Technology
  • Shuo Chen, Nanjing University of Science and Technology
  • Jian Yang, Nanjing University of Science and Technology
  • Yan Yan, Nanjing University of Science and Technology

DOI:

https://doi.org/10.1609/aaai.v34i01.5361

Abstract

In the human multi-modality perception system, integrating auditory and visual information is highly beneficial, as the two modalities provide plenty of complementary cues for understanding events. Although several recent methods have been proposed for this task, they cannot handle practical conditions in which the two modalities are temporally inconsistent. Inspired by the human perception system, which focuses on specific locations, time segments and media while performing multi-modality perception, we propose an attention-based method to simulate this process. Similar to the human mechanism, our network can adaptively select "where" to attend, "when" to attend and "which" modality to attend for audio-visual event localization. In this way, even with large temporal inconsistency between vision and audio, our network can adaptively trade information between the modalities and successfully localize events. Our method achieves state-of-the-art performance on the AVE (Audio-Visual Event) dataset, which is collected from real-life videos. In addition, we systematically investigate the audio-visual event localization task, and visualization results help us better understand how our model works.
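The abstract only names the three attention stages ("where", "when", "which"); the paper's actual architecture and equations are not given on this page. Below is a minimal, hypothetical PyTorch sketch of how such spatial, temporal and modality attention could be composed. All layer sizes, tensor shapes and module names are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (not the released code) of where/when/which attention,
# assuming visual region features v of shape (B, T, R, Dv) over T segments
# and R spatial regions, and audio features a of shape (B, T, Da).
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossModalAttentionSketch(nn.Module):
    def __init__(self, dv=512, da=128, dh=256, num_classes=29):
        super().__init__()
        # "where": score each visual region against its audio segment
        self.v_proj = nn.Linear(dv, dh)
        self.a_proj = nn.Linear(da, dh)
        self.spatial_score = nn.Linear(dh, 1)
        # "when": attention over time segments, one module per modality
        self.v_temporal = nn.MultiheadAttention(dh, num_heads=4, batch_first=True)
        self.a_temporal = nn.MultiheadAttention(dh, num_heads=4, batch_first=True)
        # "which": a per-segment gate trading off the two modalities
        self.gate = nn.Linear(2 * dh, 1)
        self.cls = nn.Linear(dh, num_classes)

    def forward(self, v, a):
        vh = self.v_proj(v)                      # (B, T, R, dh)
        ah = self.a_proj(a)                      # (B, T, dh)

        # "where": audio-guided spatial attention over visual regions
        scores = self.spatial_score(torch.tanh(vh + ah.unsqueeze(2)))  # (B, T, R, 1)
        alpha = F.softmax(scores, dim=2)
        v_seg = (alpha * vh).sum(dim=2)          # (B, T, dh) attended visual feature

        # "when": each segment attends to other segments, one way to
        # tolerate temporal misalignment between audio and vision
        v_tmp, _ = self.v_temporal(v_seg, v_seg, v_seg)
        a_tmp, _ = self.a_temporal(ah, ah, ah)

        # "which": gate decides how much to trust each modality per segment
        g = torch.sigmoid(self.gate(torch.cat([v_tmp, a_tmp], dim=-1)))  # (B, T, 1)
        fused = g * v_tmp + (1 - g) * a_tmp

        return self.cls(fused)                   # (B, T, num_classes) per-segment logits


if __name__ == "__main__":
    model = CrossModalAttentionSketch()
    v = torch.randn(2, 10, 49, 512)   # e.g. 10 one-second segments, 7x7 regions
    a = torch.randn(2, 10, 128)
    print(model(v, a).shape)          # torch.Size([2, 10, 29])
```

The gating step is the part that lets the model lean on the more reliable modality when the other is temporally inconsistent; the specific fusion and supervision used in the paper may differ.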

Published

2020-04-03

How to Cite

Xuan, H., Zhang, Z., Chen, S., Yang, J., & Yan, Y. (2020). Cross-Modal Attention Network for Temporal Inconsistent Audio-Visual Event Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(01), 279-286. https://doi.org/10.1609/aaai.v34i01.5361

Section

AAAI Technical Track: AI and the Web