Investigating Inner Properties of Multimodal Representation and Semantic Compositionality With Brain-Based Componential Semantics

Authors

  • Shaonan Wang, Institute of Automation, Chinese Academy of Sciences
  • Jiajun Zhang, Institute of Automation, Chinese Academy of Sciences
  • Nan Lin, Institute of Psychology, Chinese Academy of Sciences
  • Chengqing Zong, Institute of Automation, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v32i1.12032

Keywords:

Multimodal model, brain-based componential semantics, word representation, image representation

Abstract

Multimodal models have been shown to outperform text-based approaches at learning semantic representations. However, it remains unclear what properties are encoded in multimodal representations, in what respects they outperform single-modality representations, and what happens during semantic compositionality with different input modalities. Considering that multimodal models were originally motivated by human concept representations, we assume that correlating multimodal representations with brain-based semantics can reveal their inner properties and answer the above questions. To that end, we propose simple interpretation methods based on brain-based componential semantics. First, we investigate the inner properties of multimodal representations by correlating them with the corresponding brain-based property vectors. Then, we map the distributed vector space to the interpretable brain-based componential space to explore the inner properties of semantic compositionality. Ultimately, this paper sheds light on fundamental questions of natural language understanding, such as how to represent the meaning of words and how to combine word meanings into larger units.
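
To make the two interpretation steps described above concrete, the following is a minimal, hypothetical sketch rather than the authors' actual pipeline: it fits a linear (ridge) mapping from distributed word vectors into a brain-based componential space and scores each semantic attribute by the correlation between predicted and gold property values. The array shapes, the 65-attribute count, and the random placeholder data are illustrative assumptions.

    # Hypothetical sketch: map distributed word vectors to a brain-based
    # componential space, then score how well each attribute is encoded.
    # Data and shapes are placeholders, not the paper's actual resources.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import train_test_split
    from scipy.stats import pearsonr

    n_words, emb_dim, n_attributes = 500, 300, 65  # e.g., 65 brain-based attributes (assumed)
    rng = np.random.default_rng(0)
    embeddings = rng.standard_normal((n_words, emb_dim))       # multimodal word vectors (placeholder)
    properties = rng.standard_normal((n_words, n_attributes))  # brain-based property vectors (placeholder)

    X_tr, X_te, Y_tr, Y_te = train_test_split(
        embeddings, properties, test_size=0.2, random_state=0)

    # Linear map from the distributed vector space to the componential space.
    model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
    Y_hat = model.predict(X_te)

    # Per-attribute correlation between predicted and gold values:
    # higher r suggests that attribute is better encoded in the representation.
    scores = [pearsonr(Y_hat[:, j], Y_te[:, j])[0] for j in range(n_attributes)]
    print("mean attribute correlation:", float(np.mean(scores)))

In this sketch, comparing the per-attribute scores obtained from different representations (text-only, image-only, multimodal) would indicate in which semantic respects each representation is stronger, which is the spirit of the correlation-based analysis described in the abstract.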

Published

2018-04-26

How to Cite

Wang, S., Zhang, J., Lin, N., & Zong, C. (2018). Investigating Inner Properties of Multimodal Representation and Semantic Compositionality With Brain-Based Componential Semantics. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12032