Hybrid Neural Networks for On-Device Directional Hearing

Authors

  • Anran Wang, University of Washington
  • Maruchi Kim, University of Washington
  • Hao Zhang, ETH Zürich
  • Shyamnath Gollakota, University of Washington

DOI:

https://doi.org/10.1609/aaai.v36i10.21394

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

On-device directional hearing requires audio source separation from a given direction while meeting stringent, human-imperceptible latency requirements. While neural nets can achieve significantly better performance than traditional beamformers, all existing models fall short of supporting low-latency causal inference on computationally constrained wearables. We present DeepBeam, a hybrid model that combines traditional beamformers with a custom lightweight neural net. The former reduces the computational burden on the latter and improves its generalizability, while the latter is designed to further reduce memory and computational overhead and enable real-time, low-latency operation. Our evaluation shows performance comparable to state-of-the-art causal inference models on synthetic data, with a 5x reduction in model size, a 4x reduction in computation per second, and a 5x reduction in processing time, while generalizing better to real hardware data. Further, our real-time hybrid model runs in 8 ms on mobile CPUs designed for low-power wearable devices and achieves an end-to-end latency of 17.5 ms.
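
The core idea described above, a fixed beamforming front end feeding a small causal neural net, can be illustrated with the sketch below. This is a hedged illustration only: the delay-and-sum front end, layer sizes, microphone count, and all function names are assumptions made for exposition and do not reproduce the paper's actual architecture.

```python
# Minimal sketch of the "traditional beamformer + lightweight causal net" idea.
# All names, layer sizes, and the choice of a delay-and-sum front end are
# illustrative assumptions, not the paper's model.
import numpy as np
import torch
import torch.nn as nn

SPEED_OF_SOUND = 343.0  # m/s

def delay_and_sum(mic_signals, mic_positions, direction, sample_rate):
    """Steer a simple delay-and-sum beamformer toward `direction` (unit vector).

    mic_signals: (num_mics, num_samples) array
    mic_positions: (num_mics, 3) array of microphone coordinates in meters
    """
    delays = mic_positions @ direction / SPEED_OF_SOUND   # per-mic delay in seconds
    shifts = np.round(delays * sample_rate).astype(int)   # integer-sample delays
    aligned = [np.roll(sig, -s) for sig, s in zip(mic_signals, shifts)]
    return np.mean(aligned, axis=0)                       # (num_samples,)

class TinyCausalNet(nn.Module):
    """Small causal 1-D conv net that refines the beamformed signal.

    Causality is kept by left-padding only, so each output sample depends
    solely on past and current inputs (required for streaming inference).
    """
    def __init__(self, channels=32, kernel=5, layers=4):
        super().__init__()
        blocks, in_ch = [], 1
        for _ in range(layers):
            blocks += [nn.ConstantPad1d((kernel - 1, 0), 0.0),  # causal left padding
                       nn.Conv1d(in_ch, channels, kernel),
                       nn.ReLU()]
            in_ch = channels
        blocks.append(nn.Conv1d(in_ch, 1, 1))                   # back to one channel
        self.net = nn.Sequential(*blocks)

    def forward(self, x):  # x: (batch, 1, samples)
        return self.net(x)

# Usage: beamform one chunk toward the target direction, then refine it.
if __name__ == "__main__":
    sr, num_mics, chunk = 16000, 8, 128                         # hypothetical settings
    mics = np.random.randn(num_mics, chunk)
    positions = np.random.uniform(-0.05, 0.05, size=(num_mics, 3))
    target_dir = np.array([1.0, 0.0, 0.0])
    beamformed = delay_and_sum(mics, positions, target_dir, sr)
    net = TinyCausalNet()
    enhanced = net(torch.from_numpy(beamformed).float().view(1, 1, -1))
    print(enhanced.shape)  # torch.Size([1, 1, 128])
```

In this sketch the beamformer does the cheap spatial filtering, so the neural net only has to refine a single channel, which is the rough division of labor the abstract describes.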

Published

2022-06-28

How to Cite

Wang, A., Kim, M., Zhang, H., & Gollakota, S. (2022). Hybrid Neural Networks for On-Device Directional Hearing. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11421-11430. https://doi.org/10.1609/aaai.v36i10.21394

Section

AAAI Technical Track on Speech and Natural Language Processing