USE: A Unified Model for Universal Sound Separation and Extraction

Authors

  • Hongyu Wang, Shanghai Jiao Tong University, VUI Labs
  • Chenda Li, Shanghai Jiao Tong University, VUI Labs
  • Xin Zhou, Shanghai Jiao Tong University, VUI Labs
  • Shuai Wang, Nanjing University, Shenzhen Loop Area Institute
  • Yanmin Qian, Shanghai Jiao Tong University, VUI Labs

DOI:

https://doi.org/10.1609/aaai.v40i39.40635

Abstract

Sound separation (SS) and target sound extraction (TSE) are fundamental techniques for addressing complex acoustic scenarios. While existing SS methods struggle to determine the unknown number of sound sources, TSE approaches require precisely specified clues to achieve optimal performance. This paper proposes a unified framework that synergistically combines SS and TSE to overcome their individual limitations. Our architecture employs two complementary components: 1) an Encoder-Decoder Attractor (EDA) network that automatically infers both the source count and the corresponding acoustic clues for SS, and 2) a multi-modal fusion network that precisely interprets diverse user-provided clues (acoustic, semantic, or visual) for TSE. Through joint training with cross-task consistency constraints, we establish a unified latent space that bridges both paradigms. During inference, the system adaptively operates in either fully autonomous SS mode or clue-driven TSE mode. Experiments demonstrate strong performance on both tasks, including a 1.4 dB SDR improvement over the baseline in SS and 86% accuracy in TSE.
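The abstract describes a single model that runs clue-free (SS mode, with an EDA network proposing a variable number of attractors) or clue-driven (TSE mode, with a user clue conditioning extraction). The following toy sketch illustrates only that control flow; the shapes, the stop-probability heuristic, the `eda_attractors` and `unified_inference` names, and the dot-product masking "decoder" are all illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a unified SS/TSE inference flow: no clue -> SS mode
# (EDA proposes attractors until a stop criterion fires); clue -> TSE mode.
import numpy as np

rng = np.random.default_rng(0)
D = 8  # latent attractor/clue dimension (assumed)

def eda_attractors(mixture_emb, max_sources=4, stop_thresh=0.5):
    """Toy Encoder-Decoder Attractor loop: emit attractors until the
    estimated existence probability drops below stop_thresh."""
    attractors = []
    state = mixture_emb.mean(axis=0)            # crude summary of the mixture
    for _ in range(max_sources):
        attractor = np.tanh(state)              # stand-in decoder step
        prob = 1.0 / (1.0 + np.exp(-attractor.sum()))  # existence probability
        if prob < stop_thresh:
            break                               # no further sources predicted
        attractors.append(attractor)
        state = state - attractor               # "explain away" this source
    return np.stack(attractors) if attractors else np.empty((0, len(state)))

def unified_inference(mixture_emb, clue_emb=None):
    """SS mode when clue_emb is None; TSE mode when a clue is provided."""
    if clue_emb is None:
        attractors = eda_attractors(mixture_emb)   # infer count + clues
    else:
        attractors = clue_emb[None, :]             # user clue drives extraction
    # Toy masking "decoder": one output stream per attractor.
    return mixture_emb @ attractors.T              # (time, n_sources)

mixture = rng.standard_normal((100, D))            # (time, feature) embedding
ss_out = unified_inference(mixture)                        # autonomous SS mode
tse_out = unified_inference(mixture, rng.standard_normal(D))  # clue-driven TSE
```

In this sketch the only difference between the two modes is where the attractors come from; the downstream "decoder" is shared, mirroring the unified latent space the abstract describes.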

Published

2026-03-14

How to Cite

Wang, H., Li, C., Zhou, X., Wang, S., & Qian, Y. (2026). USE: A Unified Model for Universal Sound Separation and Extraction. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33476–33484. https://doi.org/10.1609/aaai.v40i39.40635

Issue

Section

AAAI Technical Track on Natural Language Processing IV