Exploiting Audio-Visual Consistency with Partial Supervision for Spatial Audio Generation

Authors

  • Yan-Bo Lin, Graduate Inst. of Communication Engineering, National Taiwan University, Taiwan
  • Yu-Chiang Frank Wang, Graduate Inst. of Communication Engineering, National Taiwan University, Taiwan; ASUS Intelligent Cloud Services, Taiwan

Keywords:

Applications, Multi-modal Vision

Abstract

Humans perceive rich auditory experiences through the distinct sounds heard by each ear. Videos recorded with binaural audio in particular simulate how humans receive ambient sound. However, a large number of videos contain only monaural audio, which degrades the user experience due to the lack of ambient spatial information. To address this issue, we propose an audio spatialization framework that converts a monaural video into a binaural one by exploiting the relationship between its audio and visual components. By preserving left-right consistency in both audio and visual modalities, our learning strategy can be viewed as a self-supervised learning technique, and it alleviates the dependency on large amounts of training video with ground-truth binaural audio. Experiments on benchmark datasets confirm the effectiveness of our proposed framework in both semi-supervised and fully supervised scenarios, with ablation studies and visualizations further supporting the use of our model for audio spatialization.
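The left-right consistency idea described in the abstract can be illustrated with a toy sketch: horizontally flipping the video should cause the predicted left and right audio channels to swap, and the discrepancy between these two predictions can serve as a self-supervised training signal that needs no ground-truth binaural audio. The `spatialize` function below is a hypothetical stand-in for the paper's spatialization network (it simply pans the mono signal by frame brightness); the actual model and loss are not specified on this page.

```python
import numpy as np

def spatialize(video, mono):
    # Hypothetical stand-in for an audio spatialization network:
    # a toy function that pans the mono signal left or right
    # according to the horizontal brightness balance of the frame.
    w = video.mean(axis=0)                      # per-column brightness
    pan = w[: w.size // 2].mean() - w[w.size // 2 :].mean()
    left = mono * (0.5 + 0.5 * np.tanh(pan))
    right = mono * (0.5 - 0.5 * np.tanh(pan))
    return np.stack([left, right])              # (2, T) binaural signal

def consistency_loss(video, mono):
    # Left-right consistency: horizontally flipping the video
    # should swap the predicted left/right channels. The mean
    # squared mismatch between the two is the self-supervised loss.
    pred = spatialize(video, mono)
    flipped_pred = spatialize(video[:, ::-1], mono)  # flip frame
    swapped_pred = pred[::-1]                        # swap L and R
    return float(np.mean((flipped_pred - swapped_pred) ** 2))
```

Because such a loss depends only on the model's own predictions, it can be computed on monaural-only videos, which is what lets the framework reduce its reliance on videos with recorded binaural audio.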

Published

2021-05-18

How to Cite

Lin, Y.-B., & Wang, Y.-C. F. (2021). Exploiting Audio-Visual Consistency with Partial Supervision for Spatial Audio Generation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2056-2063. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16302

Section

AAAI Technical Track on Computer Vision II