Visual Hallucination Elevates Speech Recognition

Authors

  • Fang Zhang, School of Computer Science and Technology, University of Science and Technology of China; State Key Laboratory of Cognitive Intelligence
  • Yongxin Zhu, School of Computer Science and Technology, University of Science and Technology of China; State Key Laboratory of Cognitive Intelligence
  • Xiangxiang Wang, Tencent YouTu Lab
  • Huang Chen, Tencent YouTu Lab
  • Xing Sun, Tencent YouTu Lab
  • Linli Xu, School of Computer Science and Technology, University of Science and Technology of China; State Key Laboratory of Cognitive Intelligence

DOI:

https://doi.org/10.1609/aaai.v38i17.29926

Keywords:

NLP: Speech, NLP: Generation

Abstract

Due to the detrimental impact of noise on the conventional audio speech recognition (ASR) task, audio-visual speech recognition (AVSR) has been proposed, which incorporates visual signals from video alongside the audio. Although existing methods have demonstrated that the aligned visual input of lip movements can enhance the robustness of AVSR systems against noise, paired videos are not always available during inference, leading to the problem of the missing visual modality, which restricts their practicality in real-world scenarios. To tackle this problem, we propose a Discrete Feature based Visual Generative Model (DFVGM), which exploits semantic correspondences between the audio and visual modalities during training and generates visual hallucinations in lieu of real videos during inference. The primary challenge is to generate the visual hallucination from noisy audio while preserving semantic correspondences with the clean speech. To address this challenge, we first train the audio encoder in the Audio-Only (AO) setting, producing continuous semantic features closely associated with the linguistic information. Simultaneously, the visual encoder is trained in the Visual-Only (VO) setting, producing visual features that are phonetically related. Next, we employ K-means to discretize the continuous audio and visual feature spaces. This discretization step allows DFVGM to capture high-level semantic structures that are more resilient to noise and to generate visual hallucinations of high quality. To evaluate the effectiveness and robustness of our approach, we conduct extensive experiments on two publicly available datasets. The results demonstrate that our method achieves a remarkable 53% relative reduction (30.5% -> 12.9%) in Word Error Rate (WER) on average compared to the current state-of-the-art Audio-Only (AO) baselines, while maintaining comparable results (< 5% difference) under the Audio-Visual (AV) setting even without video as input.
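As a concrete illustration of the discretization step described above, the following is a minimal sketch, not the authors' released code: it assumes continuous encoder features are available as a NumPy array and uses scikit-learn's KMeans as a stand-in clusterer. The feature dimension (512), the number of discrete units (200), and the function names are illustrative assumptions, not the paper's actual configuration.

# Sketch of discretizing continuous encoder features into unit indices.
# Assumptions (not from the paper): 512-dim features, 200 clusters,
# scikit-learn's KMeans as the clustering backend.
import numpy as np
from sklearn.cluster import KMeans

def fit_codebook(features: np.ndarray, n_units: int = 200) -> KMeans:
    """Fit a K-means codebook on continuous features of shape (n_frames, feat_dim),
    e.g. frame-level outputs of the audio encoder (AO) or visual encoder (VO)."""
    return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit(features)

def discretize(codebook: KMeans, features: np.ndarray) -> np.ndarray:
    """Map each continuous frame to its nearest cluster index (discrete unit)."""
    return codebook.predict(features)

if __name__ == "__main__":
    # Toy stand-in for encoder outputs: 1000 frames of 512-dim features.
    rng = np.random.default_rng(0)
    audio_feats = rng.normal(size=(1000, 512)).astype(np.float32)
    codebook = fit_codebook(audio_feats, n_units=200)
    units = discretize(codebook, audio_feats)  # shape (1000,), values in [0, 200)
    print(units[:20])

Mapping each frame to its nearest centroid collapses fine-grained continuous variation (including noise) into a finite vocabulary of units, which is the property the abstract credits for the noise resilience of the discretized feature spaces.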

Published

2024-03-24

How to Cite

Zhang, F., Zhu, Y., Wang, X., Chen, H., Sun, X., & Xu, L. (2024). Visual Hallucination Elevates Speech Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19542-19550. https://doi.org/10.1609/aaai.v38i17.29926

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II