Multi-attention Recurrent Network for Human Communication Comprehension

Authors

  • Amir Zadeh, Carnegie Mellon University
  • Paul Pu Liang, Carnegie Mellon University
  • Soujanya Poria, Nanyang Technological University
  • Prateek Vij, Nanyang Technological University
  • Erik Cambria, Nanyang Technological University
  • Louis-Philippe Morency, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v32i1.12024

Keywords:

Multimodal Machine Learning, Attention Networks, Attention Modeling, Natural Language Processing

Abstract

Human face-to-face communication is a complex multimodal signal. We use words (language modality), gestures (vision modality), and changes in tone (acoustic modality) to convey our intentions. Humans easily process and understand face-to-face communication; however, comprehending this form of communication remains a significant challenge for Artificial Intelligence (AI). AI must understand each modality and the interactions between them that shape the communication. In this paper, we present a novel neural architecture for understanding human communication called the Multi-attention Recurrent Network (MARN). The main strength of our model comes from discovering interactions between modalities through time using a neural component called the Multi-attention Block (MAB) and storing them in the hybrid memory of a recurrent component called the Long-short Term Hybrid Memory (LSTHM). We perform extensive comparisons on six publicly available datasets for multimodal sentiment analysis, speaker trait recognition, and emotion recognition. MARN achieves state-of-the-art performance on all six datasets.

Published

2018-04-27

How to Cite

Zadeh, A., Liang, P. P., Poria, S., Vij, P., Cambria, E., & Morency, L.-P. (2018). Multi-attention Recurrent Network for Human Communication Comprehension. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12024