Don’t Be So Sure! Boosting ASR Decoding via Confidence Relaxation
DOI: https://doi.org/10.1609/aaai.v37i11.26614
Keywords: SNLP: Other Foundations of Speech & Natural Language Processing, SNLP: Applications, SNLP: Conversational AI/Dialogue Systems, SNLP: Interpretability & Analysis of NLP Models, SNLP: Language Models
Abstract
Automatic Speech Recognition (ASR) systems frequently use a search-based decoding strategy aiming to find the best attainable transcript by considering multiple candidates. One prominent speech recognition decoding heuristic is beam search, which seeks the transcript with the greatest likelihood computed using the predicted distribution. While showing substantial performance gains in various tasks, beam search loses some of its effectiveness when the predicted probabilities are highly confident, i.e., the predicted distribution is massed for a single or very few classes. We show that recently proposed Self-Supervised Learning (SSL)-based ASR models tend to yield exceptionally confident predictions that may hamper beam search from truly considering a diverse set of candidates. We perform a layer analysis to reveal and visualize how predictions evolve, and propose a decoding procedure that improves the performance of fine-tuned ASR models. Our proposed approach does not require further training beyond the original fine-tuning, nor additional model parameters. In fact, we find that our proposed method requires significantly less inference computation than current approaches. We propose aggregating the top M layers, potentially leveraging useful information encoded in intermediate layers, and relaxing model confidence. We demonstrate the effectiveness of our approach by conducting an empirical study on varying amounts of labeled resources and different model sizes, showing consistent improvements in particular when applied to low-resource scenarios.
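The idea sketched in the abstract — aggregating the top M layers and relaxing model confidence before search decoding — could look roughly like the following. This is a minimal illustrative sketch, not the paper's exact formulation: the function name, the simple mean over layer logits, and the temperature-softmax relaxation are all assumptions.

```python
import numpy as np

def relaxed_posteriors(layer_logits, m=3, temperature=2.0):
    """Illustrative sketch: average the logits of the top (last) m layers,
    then apply a temperature-scaled softmax to soften over-confident
    frame-level predictions before beam search.

    layer_logits: array of shape (num_layers, time, vocab).
    m and temperature are hypothetical hyperparameters, not values
    taken from the paper.
    """
    avg = np.mean(layer_logits[-m:], axis=0)       # aggregate top-m layers
    scaled = avg / temperature                     # relax confidence (T > 1 flattens)
    scaled -= scaled.max(axis=-1, keepdims=True)   # numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum(axis=-1, keepdims=True)
```

A temperature above 1 spreads probability mass over more classes, so a beam search run on these relaxed posteriors can keep a more diverse candidate set than one run on the final layer's sharply peaked distribution.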
Published
2023-06-26
How to Cite
Wullach, T., & Chazan, S. E. (2023). Don’t Be So Sure! Boosting ASR Decoding via Confidence Relaxation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(11), 13780-13788. https://doi.org/10.1609/aaai.v37i11.26614
Section
AAAI Technical Track on Speech & Natural Language Processing