Independency Adversarial Learning for Cross-Modal Sound Separation

Authors

  • Zhenkai Lin, University of Electronic Science and Technology of China
  • Yanli Ji, University of Electronic Science and Technology of China
  • Yang Yang, University of Electronic Science and Technology of China

DOI:

https://doi.org/10.1609/aaai.v38i4.28140

Keywords:

CV: Multi-modal Vision, ML: Unsupervised & Self-Supervised Learning

Abstract

Sound mixture separation remains challenging due to heavy overlap between sounds and disturbance from noise; unsupervised separation increases the difficulty further. Because sound overlap hinders accurate separation, we propose an Independency Adversarial Learning based Cross-Modal Sound Separation (IAL-CMS) approach. IAL employs adversarial learning to minimize the correlation between separated sound elements, encouraging high independence among them; CMS performs cross-modal sound separation, incorporating audio-visual consistent feature learning and interactive cross-attention learning to emphasize the semantic consistency among cross-modal features. Both audio-visual consistency and audio consistency are maintained to guarantee accurate separation. Together, the consistency constraints and sound independence ensure the decomposition of overlapping mixtures into unrelated and distinguishable sound elements. The proposed approach is evaluated on MUSIC, VGGSound, and AudioSet. Extensive experiments confirm that our approach outperforms existing approaches in both supervised and unsupervised scenarios.
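The abstract describes penalizing correlation between separated sound elements so that the decomposed signals are statistically independent. The paper's IAL component does this with a learned adversarial discriminator; as a much simpler illustration of the underlying idea, the sketch below computes a squared-Pearson-correlation penalty between two separated signals. The function name and the use of raw waveforms are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def correlation_independence_loss(s1, s2, eps=1e-8):
    """Illustrative independence penalty: squared Pearson correlation
    between two separated signals. Driving this toward zero encourages
    the separated elements to be mutually uncorrelated (a weaker proxy
    for the adversarially learned independence objective in the paper)."""
    s1 = s1 - s1.mean()
    s2 = s2 - s2.mean()
    corr = (s1 * s2).sum() / (np.sqrt((s1 ** 2).sum() * (s2 ** 2).sum()) + eps)
    return corr ** 2

# A periodic tone and unrelated noise are nearly uncorrelated,
# so they incur almost no penalty; a signal paired with itself
# is maximally correlated and incurs the full penalty.
t = np.linspace(0.0, 1.0, 16000)
tone = np.sin(2 * np.pi * 440 * t)                      # stand-in separated element
noise = np.random.default_rng(0).standard_normal(t.shape)  # unrelated element
print(correlation_independence_loss(tone, noise))  # close to 0
print(correlation_independence_loss(tone, tone))   # close to 1
```

In the actual IAL-CMS model this scalar penalty is replaced by an adversarially trained network, which can capture nonlinear dependencies that plain correlation misses.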

Published

2024-03-24

How to Cite

Lin, Z., Ji, Y., & Yang, Y. (2024). Independency Adversarial Learning for Cross-Modal Sound Separation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(4), 3522-3530. https://doi.org/10.1609/aaai.v38i4.28140

Section

AAAI Technical Track on Computer Vision III