Lipper: Synthesizing Thy Speech Using Multi-View Lipreading

Authors

  • Yaman Kumar, Adobe
  • Rohit Jain, Netaji Subhas Institute of Technology
  • Khwaja Mohd. Salik, Netaji Subhas Institute of Technology
  • Rajiv Ratn Shah, Indraprastha Institute of Information Technology, Delhi
  • Yifang Yin, National University of Singapore
  • Roger Zimmermann, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v33i01.33012588

Abstract

Lipreading has many potential applications, for example in surveillance and video conferencing. Despite this, most work on building lipreading systems has been limited to classifying silent videos into classes that represent text phrases. However, treating lipreading as a text-based classification task raises several problems, such as its dependence on a particular language and on a vocabulary mapping. Thus, in this paper we propose a multi-view lipreading-to-audio system, namely Lipper, which models lipreading as a regression task. The model takes silent videos as input and produces speech as output. With multi-view silent videos, we observe an improvement over single-view speech reconstruction results. We show this by presenting an exhaustive set of experiments in speaker-dependent, out-of-vocabulary, and speaker-independent settings. Further, we compare the delay values of Lipper with those of other speechreading systems to demonstrate the real-time nature of the audio produced. We also conduct a user study on the audio produced by Lipper to assess its comprehensibility.
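The abstract frames lip-to-speech synthesis as a regression problem: frames from each camera view are encoded, the views are fused, and the model predicts audio features over time. The sketch below is only an illustration of that general setup, not the authors' Lipper architecture; the encoder layout, fusion by concatenation, the LSTM, the layer sizes, and the use of mel-spectrogram frames as the regression target are all assumptions made for the example.

```python
# Illustrative sketch of a multi-view video-to-speech regression model.
# All module names, layer sizes, and the audio-feature target are assumptions,
# not details taken from the Lipper paper.
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Encodes one camera view of a lip-region frame into a feature vector."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, feat_dim)

    def forward(self, x):           # x: (batch, 1, H, W)
        h = self.cnn(x).flatten(1)  # (batch, 32)
        return self.fc(h)           # (batch, feat_dim)


class MultiViewSpeechRegressor(nn.Module):
    """Regresses audio feature frames from multi-view silent video."""

    def __init__(self, num_views: int = 2, feat_dim: int = 128, audio_dim: int = 80):
        super().__init__()
        self.encoders = nn.ModuleList(ViewEncoder(feat_dim) for _ in range(num_views))
        self.rnn = nn.LSTM(num_views * feat_dim, 256, batch_first=True)
        self.head = nn.Linear(256, audio_dim)  # e.g., one mel-spectrogram frame

    def forward(self, views):
        # views: list of tensors, one per view, each (batch, time, 1, H, W)
        batch, time = views[0].shape[:2]
        per_view = [
            enc(v.reshape(batch * time, *v.shape[2:])).reshape(batch, time, -1)
            for enc, v in zip(self.encoders, views)
        ]
        fused = torch.cat(per_view, dim=-1)  # (batch, time, num_views * feat_dim)
        out, _ = self.rnn(fused)
        return self.head(out)                # (batch, time, audio_dim)


if __name__ == "__main__":
    model = MultiViewSpeechRegressor(num_views=2)
    frontal = torch.randn(4, 25, 1, 64, 64)  # 25-frame clips, frontal view
    profile = torch.randn(4, 25, 1, 64, 64)  # same clips, profile view
    audio_feats = model([frontal, profile])
    print(audio_feats.shape)  # torch.Size([4, 25, 80])
```

Predicted audio features of this kind would still need a vocoder or an inverse spectrogram step to yield a waveform; the sketch stops at the regression output the abstract describes.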

Published

2019-07-17

How to Cite

Kumar, Y., Jain, R., Salik, K. M., Shah, R. R., Yin, Y., & Zimmermann, R. (2019). Lipper: Synthesizing Thy Speech Using Multi-View Lipreading. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 2588-2595. https://doi.org/10.1609/aaai.v33i01.33012588

Section

AAAI Technical Track: Humans and AI