Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition

Authors

  • Yiming Rong, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yixin Zhang, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Ziyi Wang, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Deyang Jiang, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Yunlong Zhao, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences
  • Haoran Wu, Institute of Automation, Chinese Academy of Sciences
  • Shiyu Zhou, Institute of Automation, Chinese Academy of Sciences
  • Bo Xu, Institute of Automation, Chinese Academy of Sciences; University of Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v40i39.40563

Abstract

Automatic speech recognition (ASR) systems achieve remarkable performance under common conditions but often struggle to leverage long-context information in contextualized scenarios that require domain-specific knowledge, such as conference presentations. This challenge arises primarily from constrained model context windows and the sparsity of relevant information within extensive contextual noise. To address this, we propose SAP^2, a novel framework that dynamically prunes and integrates relevant contextual keywords in two stages. Each stage leverages our proposed Speech-Driven Attention-based Pooling mechanism, which efficiently compresses context embeddings while preserving speech-salient information. Experimental results demonstrate state-of-the-art performance of SAP^2 on the SlideSpeech and LibriSpeech datasets, achieving word error rates (WER) of 7.71% and 1.12%, respectively. On SlideSpeech, our method reduces the biased word error rate (B-WER) by 41.1% relative to non-contextual baselines. SAP^2 also exhibits robust scalability, consistently maintaining performance under extensive contextual input on both datasets.
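The paper itself does not include implementation details here, but the core idea of the abstract's Speech-Driven Attention-based Pooling can be sketched as cross-attention in which queries derived from the speech representation pool a long list of context-keyword embeddings down to a few slots. The projection `W_q`, the slot count, and the use of the mean speech state as the query source are all illustrative assumptions, not the authors' actual design:

```python
import numpy as np


def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)


def speech_driven_attention_pooling(speech_emb, context_emb, num_slots=4, seed=0):
    """Compress context embeddings into a few slots using attention
    queries derived from the speech representation (illustrative sketch).

    speech_emb:  (T, d) speech encoder outputs
    context_emb: (N, d) embeddings of contextual keywords
    Returns a (num_slots, d) array of pooled context embeddings.
    """
    rng = np.random.default_rng(seed)
    d = speech_emb.shape[1]
    # Hypothetical learned projection: mean speech state -> num_slots queries.
    W_q = rng.standard_normal((d, num_slots * d)) / np.sqrt(d)
    queries = (speech_emb.mean(axis=0) @ W_q).reshape(num_slots, d)
    # Scaled dot-product attention over the N context entries.
    scores = queries @ context_emb.T / np.sqrt(d)   # (num_slots, N)
    weights = softmax(scores, axis=-1)              # rows sum to 1
    return weights @ context_emb                    # (num_slots, d)
```

Pooling to a fixed number of slots is what makes the context cost independent of how many keywords are supplied, which is consistent with the scalability claim above; the real model would learn the projections jointly with the recognizer rather than sample them randomly.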

Published

2026-03-14

How to Cite

Rong, Y., Zhang, Y., Wang, Z., Jiang, D., Zhao, Y., Wu, H., … Xu, B. (2026). Speech-Aware Long Context Pruning and Integration for Contextualized Automatic Speech Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 32835–32842. https://doi.org/10.1609/aaai.v40i39.40563

Section

AAAI Technical Track on Natural Language Processing IV