Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition


  • Yubei Xiao Sun Yat-sen University, China
  • Ke Gong Dark Matter AI Research
  • Pan Zhou Salesforce
  • Guolin Zheng Sun Yat-sen University, China
  • Xiaodan Liang Sun Yat-sen University, China Dark Matter AI Research
  • Liang Lin Sun Yat-sen University, China Dark Matter AI Research


Machine Translation & Multilinguality, Speech & Signal Processing, Transfer/Adaptation/Multi-task/Meta/Automated Learning


Low-resource automatic speech recognition (ASR) is challenging because the target language's data are insufficient to train an ASR model well. To address this issue, meta-learning formulates ASR for each source language as many small ASR tasks and meta-learns a model initialization over tasks from all source languages, enabling fast adaptation to unseen target languages. However, task quantity and difficulty vary greatly across source languages because of their different data scales and diverse phonological systems, leading to task-quantity and task-difficulty imbalance issues and thus to the failure of multilingual meta-learning ASR (MML-ASR). In this work, we solve this problem by developing a novel adversarial meta sampling (AMS) approach to improve MML-ASR. When sampling tasks in MML-ASR, AMS adaptively determines the task sampling probability for each source language. Specifically, a large query loss for a source language indicates that its tasks have not been sufficiently sampled to train the ASR model, given their quantity and difficulty, and thus should be sampled more frequently for extra learning. Inspired by this observation, we feed the historical task query losses of all source language domains into a network that learns a task sampling policy by adversarially increasing the current query loss of MML-ASR. The learnt policy thereby tracks the learning progress of each language and predicts good task sampling probabilities for more effective learning. Finally, experimental results on two multilingual datasets show significant performance improvements when applying AMS to MML-ASR, and also demonstrate the applicability of AMS to other low-resource speech tasks and transfer learning ASR approaches.
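The core intuition above can be sketched in a few lines: languages whose tasks still incur a large query loss should be sampled more often. The snippet below is a minimal, simplified stand-in for the learned AMS policy network, replacing it with a plain softmax over running-average query losses; all names (`sampling_probs`, `sample_language`, the example losses) are hypothetical and purely illustrative, not the paper's actual implementation.

```python
import math
import random

def sampling_probs(query_losses, temperature=1.0):
    """Softmax over per-language historical query losses.

    Languages with larger query loss get a higher sampling
    probability (simplified stand-in for the AMS policy network).
    """
    scaled = [loss / temperature for loss in query_losses.values()]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    return {lang: e / z for lang, e in zip(query_losses, exps)}

def sample_language(query_losses, rng=random):
    """Draw one source language according to the loss-based policy."""
    probs = sampling_probs(query_losses)
    langs, weights = zip(*probs.items())
    return rng.choices(langs, weights=weights, k=1)[0]

# Hypothetical running-average query losses for three source languages.
losses = {"cs": 2.1, "fr": 0.8, "de": 1.4}
probs = sampling_probs(losses)  # e.g. "cs" gets the highest probability
```

In the actual AMS approach, a policy network replaces this fixed softmax and is trained adversarially to increase the meta-learner's current query loss, so the mapping from loss history to sampling probability is itself learned.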




How to Cite

Xiao, Y., Gong, K., Zhou, P., Zheng, G., Liang, X., & Lin, L. (2021). Adversarial Meta Sampling for Multilingual Low-Resource Speech Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14112-14120.



AAAI Technical Track on Speech and Natural Language Processing III