Towards Zero-Shot Learning for Automatic Phonemic Transcription


  • Xinjian Li Carnegie Mellon University
  • Siddharth Dalmia Carnegie Mellon University
  • David Mortensen Carnegie Mellon University
  • Juncheng Li Carnegie Mellon University
  • Alan Black Carnegie Mellon University
  • Florian Metze Carnegie Mellon University



Abstract

Automatic phonemic transcription tools are useful for low-resource language documentation. However, due to the lack of training sets, only a tiny fraction of languages have phonemic transcription tools. Fortunately, multilingual acoustic modeling provides a solution given limited audio training data. A more challenging problem is to build phonemic transcribers for languages with zero training data. The difficulty of this task is that phoneme inventories often differ between the training languages and the target language, making it infeasible to recognize unseen phonemes. In this work, we address this problem by adopting the idea of zero-shot learning. Our model is able to recognize unseen phonemes in the target language without any training data. In our model, we decompose phonemes into corresponding articulatory attributes such as vowel and consonant. Instead of predicting phonemes directly, we first predict distributions over articulatory attributes, and then compute phoneme distributions with a customized acoustic model. We evaluate our model by training it using 13 languages and testing it using 7 unseen languages. We find that it achieves 7.7% better phoneme error rate on average over a standard multilingual model.
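The core idea of composing phoneme scores from articulatory-attribute predictions can be sketched as follows. This is a minimal illustration, not the paper's actual model: the attribute inventory, the phoneme signatures, and the product-of-attribute-probabilities composition below are all simplifying assumptions made for the example (the paper uses a learned acoustic model and a much richer attribute set).

```python
import numpy as np

# Hypothetical articulatory attributes; the paper's inventory is larger.
ATTRIBUTES = ["vowel", "consonant", "voiced", "nasal", "high"]

# Each phoneme is described by a binary signature over the attributes above.
# New (unseen) phonemes can be added here without retraining the attribute
# predictor, which is what enables zero-shot recognition.
PHONEME_SIGNATURES = {
    "a": np.array([1, 0, 1, 0, 0]),  # voiced vowel
    "i": np.array([1, 0, 1, 0, 1]),  # high voiced vowel
    "m": np.array([0, 1, 1, 1, 0]),  # voiced nasal consonant
    "t": np.array([0, 1, 0, 0, 0]),  # voiceless consonant
}

def phoneme_distribution(attr_probs):
    """Compose per-attribute probabilities into a phoneme distribution.

    attr_probs[k] is the (assumed) model probability that attribute k is
    present in the current audio frame. A phoneme's score is the product
    of matching attribute probabilities (p if the signature contains the
    attribute, 1 - p otherwise), renormalized over the inventory.
    """
    scores = {}
    for phoneme, sig in PHONEME_SIGNATURES.items():
        p = np.where(sig == 1, attr_probs, 1.0 - attr_probs)
        scores[phoneme] = float(np.prod(p))
    total = sum(scores.values())
    return {ph: s / total for ph, s in scores.items()}

# A frame where the attribute model is confident in a high voiced vowel:
dist = phoneme_distribution(np.array([0.9, 0.1, 0.8, 0.05, 0.85]))
print(max(dist, key=dist.get))  # → i
```

The key property is that the trained component only ever predicts attributes, which are shared across languages; a target language's phoneme inventory enters purely through the signature table, so unseen phonemes become recognizable as soon as their attribute decomposition is known.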




How to Cite

Li, X., Dalmia, S., Mortensen, D., Li, J., Black, A., & Metze, F. (2020). Towards Zero-Shot Learning for Automatic Phonemic Transcription. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 8261-8268.



AAAI Technical Track: Natural Language Processing