Look, Listen and Learn — A Multimodal LSTM for Speaker Identification

Authors

  • Jimmy Ren SenseTime Group Limited
  • Yongtao Hu The University of Hong Kong
  • Yu-Wing Tai SenseTime Group Limited
  • Chuan Wang The University of Hong Kong
  • Li Xu SenseTime Group Limited
  • Wenxiu Sun SenseTime Group Limited
  • Qiong Yan SenseTime Group Limited

DOI:

https://doi.org/10.1609/aaai.v30i1.10471

Abstract

Speaker identification refers to the task of localizing the face of the person whose identity matches the ongoing voice in a video. This task not only requires collective perception over both visual and auditory signals; robustness to severe quality degradations and unconstrained content variations is also indispensable. In this paper, we describe a novel multimodal Long Short-Term Memory (LSTM) architecture which seamlessly unifies both visual and auditory modalities from the beginning of each sequence input. The key idea is to extend the conventional LSTM by sharing weights not only across time steps but also across modalities. We show that modeling the temporal dependency across face and voice can significantly improve robustness to content quality degradations and variations. We also found that our multimodal LSTM is robust to distractors, namely the non-speaking identities. We applied our multimodal LSTM to The Big Bang Theory dataset and showed that our system outperforms state-of-the-art speaker identification systems with a lower false alarm rate and higher recognition accuracy.
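To make the weight-sharing idea concrete, below is a minimal, hypothetical sketch (not the authors' exact formulation): a single LSTM cell whose parameters are reused both across time steps and across the face and voice streams, followed by a simple classifier over the fused hidden states. All names (SharedModalityLSTM, feat_dim, etc.) and the assumption that both modalities are pre-projected to a common feature dimension are illustrative assumptions.

```python
# Hypothetical sketch of "sharing weights across modalities" in an LSTM,
# for illustration only; the paper's actual architecture may differ.
import torch
import torch.nn as nn

class SharedModalityLSTM(nn.Module):
    def __init__(self, feat_dim, hidden_dim, num_identities):
        super().__init__()
        # One LSTMCell: its weights are reused across time steps (as usual)
        # and across both modalities (the idea described in the abstract).
        self.cell = nn.LSTMCell(feat_dim, hidden_dim)
        self.classifier = nn.Linear(2 * hidden_dim, num_identities)

    def forward(self, face_seq, voice_seq):
        # face_seq, voice_seq: (batch, time, feat_dim); both streams are
        # assumed to be projected to the same dimension beforehand.
        B, T, _ = face_seq.shape
        h_f = c_f = face_seq.new_zeros(B, self.cell.hidden_size)
        h_v = c_v = voice_seq.new_zeros(B, self.cell.hidden_size)
        for t in range(T):
            h_f, c_f = self.cell(face_seq[:, t], (h_f, c_f))   # visual stream
            h_v, c_v = self.cell(voice_seq[:, t], (h_v, c_v))  # auditory stream
        # Fuse the final hidden states of the two streams and classify identity.
        return self.classifier(torch.cat([h_f, h_v], dim=-1))
```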

Published

2016-03-05

How to Cite

Ren, J., Hu, Y., Tai, Y.-W., Wang, C., Xu, L., Sun, W., & Yan, Q. (2016). Look, Listen and Learn — A Multimodal LSTM for Speaker Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10471