Boosting ASR Robustness via Test-Time Reinforcement Learning with Audio-Text Semantic Rewards

Authors

  • Linghan Fang, The Hong Kong University of Science and Technology (Guangzhou); Technische Universität München
  • Tianxin Xie, The Hong Kong University of Science and Technology (Guangzhou)
  • Li Liu, The Hong Kong University of Science and Technology (Guangzhou)

DOI:

https://doi.org/10.1609/aaai.v40i36.40323

Abstract

Recently, Automatic Speech Recognition (ASR) systems (e.g., Whisper) have achieved remarkable accuracy improvements but remain highly sensitive to unseen real-world data (data with large distribution shifts), including noisy environments and diverse accents. To address this issue, test-time adaptation (TTA) has shown great potential for improving model adaptability at inference time without ground-truth labels; existing TTA methods often rely on pseudo-labeling or entropy minimization. However, by treating model confidence as a learning signal, these methods may reinforce high-confidence errors, leading to confirmation bias that undermines adaptation. To overcome this limitation, we present ASR-TRA, a novel Test-time Reinforcement Adaptation framework inspired by causal intervention. More precisely, our method introduces a learnable decoder prompt and uses temperature-controlled stochastic decoding to generate diverse transcription candidates. These candidates are scored by a reward model that measures audio-text semantic alignment, and the resulting feedback is used to update both model and prompt parameters via reinforcement learning. Comprehensive experiments on LibriSpeech with synthetic noise and on the L2-ARCTIC accented-English dataset demonstrate that our method significantly outperforms existing state-of-the-art (SOTA) methods, including SUTA and SGEM, in both accuracy and inference speed. Ablation studies further confirm the effectiveness of combining audio- and language-based rewards, highlighting our method's enhanced stability and interpretability. Overall, our approach provides a practical and robust solution for deploying ASR systems in challenging real-world conditions.
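The adaptation loop the abstract describes (temperature-controlled stochastic decoding to produce diverse candidates, a reward scoring each candidate, and a reinforcement update of a learnable prompt) can be sketched in miniature. The snippet below is a hedged illustration, not the authors' implementation: `reward_fn`, the scalar `prompt`, and the toy update rule are placeholders standing in for the paper's audio-text semantic reward model and the actual gradient step on model and prompt parameters.

```python
import math
import random

def softmax(logits, temperature):
    # Temperature > 1 flattens the distribution, yielding more diverse samples;
    # temperature < 1 sharpens it toward greedy decoding.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return [e / z for e in exps]

def sample_candidates(logits, temperature, n, rng):
    # Draw n token indices from the tempered distribution
    # (a stand-in for sampling n full transcription hypotheses).
    probs = softmax(logits, temperature)
    return [rng.choices(range(len(probs)), weights=probs)[0] for _ in range(n)]

def reinforce_update(prompt, candidates, reward_fn, lr=0.1):
    # reward_fn is a placeholder for the audio-text semantic reward model.
    # Using the batch-mean reward as a baseline gives REINFORCE-style advantages.
    rewards = [reward_fn(c) for c in candidates]
    baseline = sum(rewards) / len(rewards)
    advantages = [r - baseline for r in rewards]
    # Toy parameter update: nudge the scalar "prompt" using advantage-weighted
    # candidates, standing in for a gradient step on prompt/model parameters.
    step = sum(a * c for a, c in zip(advantages, candidates)) / len(candidates)
    return prompt + lr * step
```

The key property this sketch preserves is that the learning signal comes from an external reward rather than from the model's own confidence, which is what lets the framework avoid the confirmation bias of entropy-minimization TTA.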

Published

2026-03-14

How to Cite

Fang, L., Xie, T., & Liu, L. (2026). Boosting ASR Robustness via Test-Time Reinforcement Learning with Audio-Text Semantic Rewards. Proceedings of the AAAI Conference on Artificial Intelligence, 40(36), 30673-30681. https://doi.org/10.1609/aaai.v40i36.40323

Section

AAAI Technical Track on Natural Language Processing I