Tune-In: Training Under Negative Environments with Interference for Attention Networks Simulating Cocktail Party Effect
DOI:
https://doi.org/10.1609/aaai.v35i16.17644
Keywords:
Speech & Signal Processing, Unsupervised & Self-Supervised Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning, Representation Learning
Abstract
We study the cocktail party problem and propose a novel attention network called Tune-In, short for training under negative environments with interference. It first learns two separate spaces of speaker knowledge and speech stimuli on top of a shared feature space, where a new block structure is designed as the building block for all spaces, and then cooperatively solves different tasks. Between the two spaces, information is cast toward each other via a novel cross- and dual-attention mechanism, mimicking the bottom-up and top-down processes of the human cocktail party effect. It turns out that substantially discriminative and generalizable speaker representations can be learnt in severely interfered conditions via our self-supervised training. The experimental results verify this seeming paradox. The learnt speaker embedding has superior discriminative power over a standard speaker verification method; meanwhile, Tune-In consistently achieves remarkably better speech separation performance in terms of SI-SNRi and SDRi in all test modes, at notably lower memory and computational consumption, than state-of-the-art benchmark systems.

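The cross- and dual-attention exchange between the two spaces described in the abstract can be sketched minimally as two cross-attention passes, one in each direction. This is an illustrative sketch only: the feature shapes, the `cross_attention` function, and the symmetric exchange are assumptions for exposition, not the paper's exact architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, context):
    # queries: (Tq, d) features from one space; context: (Tk, d) from the other.
    # Scaled dot-product attention: queries attend to the other space's features.
    d = queries.shape[-1]
    scores = queries @ context.T / np.sqrt(d)          # (Tq, Tk)
    return softmax(scores, axis=-1) @ context          # (Tq, d)

rng = np.random.default_rng(0)
spk = rng.standard_normal((4, 8))    # hypothetical speaker-knowledge features
stim = rng.standard_normal((6, 8))   # hypothetical speech-stimuli features

# Dual attention: each space casts information toward the other.
spk_att = cross_attention(spk, stim)    # speaker space enriched by stimuli
stim_att = cross_attention(stim, spk)   # stimuli space enriched by speaker knowledge
print(spk_att.shape, stim_att.shape)    # (4, 8) (6, 8)
```

In a full model, the attended features would be combined with the originals (e.g. residually) inside each building block rather than used in isolation.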
Published
2021-05-18
How to Cite
Wang, J., Lam, M. W. Y., Su, D., & Yu, D. (2021). Tune-In: Training Under Negative Environments with Interference for Attention Networks Simulating Cocktail Party Effect. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 13961-13969. https://doi.org/10.1609/aaai.v35i16.17644
Section
AAAI Technical Track on Speech and Natural Language Processing III