MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces

Authors

  • Linhai Zhang, Southeast University
  • Deyu Zhou, Southeast University
  • Yulan He, University of Warwick
  • Zeng Yang, Southeast University

DOI:

https://doi.org/10.1609/aaai.v35i16.17695

Keywords:

Language Grounding & Multi-modal NLP

Abstract

Previous work has shown the effectiveness of using event representations for tasks such as script event prediction and stock market prediction. It is however still challenging to learn the subtle semantic differences between events based solely on textual descriptions of events, often represented as (subject, predicate, object) triples. As an alternative, images offer a more intuitive way of understanding event semantics. We observe that events described in text and depicted in images exhibit different levels of abstraction and therefore should be projected onto heterogeneous embedding spaces, in contrast to previous approaches, which project signals from different modalities onto a homogeneous space. In this paper, we propose a Multimodal Event Representation Learning framework (MERL) to learn event representations based on both text and image modalities simultaneously. Textual event triples are projected as Gaussian density embeddings by a dual-path Gaussian triple encoder, while event images are projected as point embeddings by a visual event component-aware image encoder. Moreover, a novel score function motivated by statistical hypothesis testing is introduced to coordinate the two embedding spaces. Experiments are conducted on various multimodal event-related tasks and results show that MERL outperforms a number of unimodal and multimodal baselines, demonstrating the effectiveness of the proposed framework.
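The abstract does not give MERL's actual score function, only that it is motivated by statistical hypothesis testing and must relate a point embedding (image) to a Gaussian density embedding (text). A minimal sketch of one plausible instantiation follows, assuming diagonal-covariance Gaussian text embeddings and scoring an image point by its log-density under the text's Gaussian, in the spirit of testing whether the point could have been drawn from that distribution. All names and shapes here are illustrative, not taken from the paper.

```python
# Hypothetical sketch, NOT the paper's implementation: score the compatibility
# of an image point embedding against a textual Gaussian density embedding.
import numpy as np

def gaussian_score(point, mu, sigma2):
    """Log-density of `point` under a diagonal Gaussian N(mu, diag(sigma2)).

    point  : (d,) image point embedding (hypothetical)
    mu     : (d,) mean of the textual Gaussian embedding (hypothetical)
    sigma2 : (d,) per-dimension variances, assumed positive
    """
    d = point.shape[0]
    # Squared Mahalanobis distance, which simplifies for a diagonal covariance.
    maha = np.sum((point - mu) ** 2 / sigma2)
    log_det = np.sum(np.log(sigma2))
    return -0.5 * (maha + log_det + d * np.log(2 * np.pi))

# Toy check: an image embedding sampled from the text's Gaussian should
# score higher than one shifted far away from it.
rng = np.random.default_rng(0)
mu, sigma2 = np.zeros(8), np.ones(8)
matched = rng.normal(mu, np.sqrt(sigma2))
mismatched = matched + 5.0
assert gaussian_score(matched, mu, sigma2) > gaussian_score(mismatched, mu, sigma2)
```

A density-based score of this kind naturally accommodates the asymmetry between the two spaces: the text side contributes a distribution (capturing its higher level of abstraction), while each image contributes a single point evaluated against it.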

Published

2021-05-18

How to Cite

Zhang, L., Zhou, D., He, Y., & Yang, Z. (2021). MERL: Multimodal Event Representation Learning in Heterogeneous Embedding Spaces. Proceedings of the AAAI Conference on Artificial Intelligence, 35(16), 14420-14427. https://doi.org/10.1609/aaai.v35i16.17695

Section

AAAI Technical Track on Speech and Natural Language Processing III