Rethinking Flow and Diffusion Bridge Models for Speech Enhancement

Authors

  • Dahan Wang, Key Laboratory of Modern Acoustics, Institute of Acoustics, Nanjing University, Nanjing, China; NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China
  • Jun Gao, Key Laboratory of Modern Acoustics, Institute of Acoustics, Nanjing University, Nanjing, China; NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China
  • Tong Lei, Tencent AI Lab, Shenzhen, China
  • Yuxiang Hu, NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China
  • Changbao Zhu, NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China
  • Kai Chen, Key Laboratory of Modern Acoustics, Institute of Acoustics, Nanjing University, Nanjing, China; NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China
  • Jing Lu, Key Laboratory of Modern Acoustics, Institute of Acoustics, Nanjing University, Nanjing, China; NJU-Horizon Intelligent Audio Lab, Horizon Robotics, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v40i39.40630

Abstract

Flow matching and diffusion bridge models have emerged as leading paradigms in generative speech enhancement, modeling stochastic processes between paired noisy and clean speech signals based on principles such as flow matching, score matching, and Schrödinger bridge. In this paper, we present a framework that unifies existing flow and diffusion bridge models by interpreting them as constructions of Gaussian probability paths with varying means and variances between paired data. Furthermore, we investigate the underlying consistency between the training/inference procedures of these generative models and conventional predictive models. Our analysis reveals that each sampling step of a well-trained flow or diffusion bridge model optimized with a data prediction loss is theoretically analogous to executing predictive speech enhancement. Motivated by this insight, we introduce an enhanced bridge model that integrates an effective probability path design with key elements from predictive paradigms, including improved network architecture, tailored loss functions, and optimized training strategies. Experiments on denoising and dereverberation tasks demonstrate that the proposed method outperforms existing flow and diffusion baselines with fewer parameters and reduced computational complexity. The results also highlight that the inherently predictive nature of this generative framework imposes limitations on its achievable upper-bound performance.
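The abstract's unifying view, a Gaussian probability path with time-varying mean and variance between paired noisy and clean speech, trained with a data-prediction loss, can be sketched as follows. This is an illustrative toy, not the authors' implementation: the linear mean interpolation, the `sigma_max * sqrt(t*(1-t))` variance schedule, and all function names are assumptions chosen only to make the idea concrete.

```python
import numpy as np

def gaussian_bridge_sample(x_clean, y_noisy, t, sigma_max=0.5, rng=None):
    """Sample x_t from a Gaussian probability path between paired signals.

    The mean interpolates linearly from clean (t=0) to noisy (t=1); the
    variance schedule sigma(t) = sigma_max * sqrt(t*(1-t)) vanishes at
    both endpoints, so the path pins the paired data. Both choices are
    illustrative assumptions, not the paper's schedule.
    """
    rng = np.random.default_rng() if rng is None else rng
    mean = (1.0 - t) * x_clean + t * y_noisy
    sigma = sigma_max * np.sqrt(t * (1.0 - t))
    return mean + sigma * rng.standard_normal(np.shape(x_clean))

def data_prediction_loss(model, x_clean, y_noisy, t, rng=None):
    """Data-prediction objective: the network regresses the clean signal
    directly from the bridge state x_t. Under this loss, each sampling
    step amounts to running a predictive enhancer on an interpolated
    input, which is the consistency the abstract points to."""
    x_t = gaussian_bridge_sample(x_clean, y_noisy, t, rng=rng)
    return float(np.mean((model(x_t, t) - x_clean) ** 2))
```

An oracle `model` that always returns the clean signal drives the loss to zero at any `t`, which is the sense in which a well-trained bridge step coincides with predictive enhancement.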

Published

2026-03-14

How to Cite

Wang, D., Gao, J., Lei, T., Hu, Y., Zhu, C., Chen, K., & Lu, J. (2026). Rethinking Flow and Diffusion Bridge Models for Speech Enhancement. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33431–33439. https://doi.org/10.1609/aaai.v40i39.40630

Section

AAAI Technical Track on Natural Language Processing IV