Decompose the Sounds and Pixels, Recompose the Events

Authors

  • Varshanth R. Rao, Huawei Noah's Ark Lab
  • Md Ibrahim Khalil, Huawei Noah's Ark Lab; University of Waterloo
  • Haoda Li, Huawei Noah's Ark Lab; University of Toronto
  • Peng Dai, Huawei Noah's Ark Lab
  • Juwei Lu, Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v36i2.20111

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

In this paper, we propose a framework centered on a novel architecture called the Event Decomposition Recomposition Network (EDRNet) to tackle the Audio-Visual Event (AVE) localization problem in the supervised and weakly supervised settings. AVEs in the real world exhibit common unraveling patterns (termed Event Progress Checkpoints (EPCs)), which humans can perceive through the cooperation of their auditory and visual senses. Unlike earlier methods, which attempt to recognize entire event sequences, the EDRNet models EPCs and inter-EPC relationships using stacked temporal convolutions. Based on the postulation that EPC representations are theoretically consistent for an event category, we introduce State Machine Based Video Fusion, a novel augmentation technique that blends source videos using different EPC template sequences. Additionally, we design a new loss function called the Land-Shore-Sea loss to compactify continuous foreground and background representations. Lastly, to alleviate the issue of confusing events during weak supervision, we propose a prediction stabilization method called Bag to Instance Label Correction. Experiments on the AVE dataset show that our collective framework outperforms the state-of-the-art by a sizable margin.
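The abstract's core architectural idea, modeling short event sub-patterns (EPCs) and their relationships with stacked temporal convolutions over per-segment audio-visual features, can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, layer shapes, and the use of NumPy with "same" padding and ReLU are all illustrative assumptions.

```python
import numpy as np

def temporal_conv(x, w, b):
    """Illustrative 1D temporal convolution with 'same' padding.
    x: (T, C_in) per-segment features; w: (k, C_in, C_out); b: (C_out,)."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))  # pad only the time axis
    out = np.zeros((x.shape[0], w.shape[2]))
    for t in range(x.shape[0]):
        window = xp[t:t + k]              # local temporal window (k, C_in)
        out[t] = np.einsum('kc,kco->o', window, w) + b
    return np.maximum(out, 0.0)           # ReLU nonlinearity

def stacked_temporal_convs(x, layers):
    """Stack temporal convolutions so deeper layers see wider temporal
    context -- loosely how short EPC-level patterns could be composed
    into longer inter-EPC relationships."""
    for w, b in layers:
        x = temporal_conv(x, w, b)
    return x

# Toy usage: 10 one-second segments, 8-dim fused audio-visual features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(10, 8))
layers = [
    (0.1 * rng.normal(size=(3, 8, 16)), np.zeros(16)),  # kernel size 3
    (0.1 * rng.normal(size=(3, 16, 4)), np.zeros(4)),
]
out = stacked_temporal_convs(feats, layers)  # shape (10, 4)
```

With two stacked kernel-size-3 layers, each output segment aggregates a 5-segment temporal receptive field, which is the intuition behind recognizing localized checkpoints rather than whole event sequences.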

Published

2022-06-28

How to Cite

Rao, V. R., Khalil, M. I., Li, H., Dai, P., & Lu, J. (2022). Decompose the Sounds and Pixels, Recompose the Events. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2144-2152. https://doi.org/10.1609/aaai.v36i2.20111

Section

AAAI Technical Track on Computer Vision II