Transcribing Natural Languages for the Deaf via Neural Editing Programs
Keywords: AI For Social Impact (AISI Track Papers Only), Humans And AI (HAI), Computer Vision (CV)
Abstract
This work studies the task of glossification, whose aim is to transcribe natural spoken language sentences into ordered sign language glosses for the Deaf and hard-of-hearing community. Previous sequence-to-sequence language models trained with paired sentence-gloss data often fail to capture the rich connections between the two distinct languages, leading to unsatisfactory transcriptions. We observe that, despite different grammars, glosses effectively simplify sentences for the ease of deaf communication, while sharing a large portion of vocabulary with sentences. This has motivated us to implement glossification by executing a collection of editing actions, e.g., word addition, deletion, and copying, called editing programs, on their natural spoken language counterparts. Specifically, we design a new neural agent that learns to synthesize and execute editing programs, conditioned on sentence contexts and partial editing results. The agent is trained to imitate minimal editing programs, while exploring the program space more widely via policy gradients to optimize sequence-wise transcription quality. Results show that our approach outperforms previous glossification models by a large margin, improving the BLEU-4 score from 16.45 to 18.89 on RWTH-PHOENIX-WEATHER-2014T and from 18.38 to 21.30 on CSL-Daily.
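To make the editing-program idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of how a program of COPY, DEL, and ADD actions could be executed over a source sentence to yield a gloss sequence. The action names, the example sentence, and the inserted gloss token are illustrative assumptions.

```python
def execute_program(tokens, program):
    """Apply editing actions left-to-right over the source `tokens`.

    Hypothetical action set, for illustration only:
      ("COPY",)   -- copy the current source token to the output
      ("DEL",)    -- skip (delete) the current source token
      ("ADD", w)  -- emit gloss token w without consuming input
    """
    out, i = [], 0
    for action in program:
        if action[0] == "COPY":
            out.append(tokens[i])
            i += 1
        elif action[0] == "DEL":
            i += 1
        elif action[0] == "ADD":
            out.append(action[1])
        else:
            raise ValueError(f"unknown action: {action}")
    return out

# Illustrative example: simplify a sentence into glosses.
sentence = ["tomorrow", "it", "will", "be", "sunny"]
program = [("COPY",), ("DEL",), ("DEL",), ("DEL",), ("COPY",), ("ADD", "WEATHER")]
print(execute_program(sentence, program))
# -> ['tomorrow', 'sunny', 'WEATHER']
```

The neural agent described in the abstract would generate such a program token by token, conditioned on the sentence and the partial output, rather than having the program given in advance.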
How to Cite
Li, D., Xu, C., Liu, L., Zhong, Y., Wang, R., Petersson, L., & Li, H. (2022). Transcribing Natural Languages for the Deaf via Neural Editing Programs. Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 11991-11999. https://doi.org/10.1609/aaai.v36i11.21457
AAAI Special Track on AI for Social Impact