Joint Multimodal Entity-Relation Extraction Based on Edge-Enhanced Graph Alignment Network and Word-Pair Relation Tagging

Authors

  • Li Yuan — School of Software Engineering, South China University of Technology, Guangzhou, China; Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China
  • Yi Cai — School of Software Engineering, South China University of Technology, Guangzhou, China; Key Laboratory of Big Data and Intelligent Robot (SCUT), MOE of China; The Peng Cheng Laboratory, Shenzhen, China
  • Jin Wang — School of Information Science and Engineering, Yunnan University, Yunnan, P.R. China
  • Qing Li — Department of Computing, The Hong Kong Polytechnic University, Hong Kong, China

DOI:

https://doi.org/10.1609/aaai.v37i9.26309

Keywords:

ML: Multimodal Learning, SNLP: Information Extraction, SNLP: Speech and Multimodality, SNLP: Sentiment Analysis and Stylistic Analysis, CV: Multi-modal Vision

Abstract

Multimodal named entity recognition (MNER) and multimodal relation extraction (MRE) are two fundamental subtasks in multimodal knowledge graph construction. However, existing methods usually handle the two tasks independently, ignoring the bidirectional interaction between them. This paper is the first to propose performing MNER and MRE jointly, as a joint multimodal entity-relation extraction (JMERE) task. Moreover, current MNER and MRE models only consider aligning visual objects with textual entities across the visual and textual graphs, ignoring entity-entity relationships and object-object relationships. To address these challenges, we propose an edge-enhanced graph alignment network with word-pair relation tagging (EEGA) for the JMERE task. Specifically, we first design a word-pair relation tagging scheme that exploits the bidirectional interaction between MNER and MRE while avoiding error propagation. We then propose an edge-enhanced graph alignment network that strengthens JMERE by aligning both nodes and edges across the two graphs. Compared with previous methods, the proposed method can leverage edge information to assist the alignment between objects and entities and to find correlations between entity-entity relationships and object-object relationships. Experiments demonstrate the effectiveness of our model.
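To make the word-pair relation tagging idea concrete, the following is a minimal sketch of how a word-pair tag table could be decoded into entities and relations in one pass. The tag scheme here (`"ENT"` for entity spans on or above the diagonal, relation labels between entity head words) is a simplified illustration, not the paper's exact tagging scheme.

```python
# Hypothetical word-pair relation tagging decoder (simplified; not the
# paper's exact scheme). A sentence of n words gets an n x n tag table:
# table[i][j] == "ENT" (i <= j) marks words[i..j] as one entity span;
# table[i][j] == a relation label links the entity headed at word i to
# the entity headed at word j. Entities and relations are read off the
# same table, so neither task depends on the other's decoded output,
# which is how this style of tagging avoids error propagation.

def decode(words, table):
    entities, relations = [], []
    n = len(words)
    # Entity spans from the upper triangle (including the diagonal).
    for i in range(n):
        for j in range(i, n):
            if table[i][j] == "ENT":
                entities.append((i, j, " ".join(words[i:j + 1])))
    # Relations between entity head words.
    heads = {start for start, _, _ in entities}
    for i in range(n):
        for j in range(n):
            tag = table[i][j]
            if tag not in (None, "ENT") and i in heads and j in heads:
                relations.append((i, tag, j))
    return entities, relations


# Toy example: "Steve founded Apple"
words = ["Steve", "founded", "Apple"]
table = [[None] * 3 for _ in range(3)]
table[0][0] = "ENT"          # "Steve" is an entity
table[2][2] = "ENT"          # "Apple" is an entity
table[0][2] = "founder_of"   # relation between the two entity heads
print(decode(words, table))
```

Because a single table jointly encodes both tasks, the MRE cells can inform entity decisions and vice versa during training, which is the bidirectional interaction the abstract refers to.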

Published

2023-06-26

How to Cite

Yuan, L., Cai, Y., Wang, J., & Li, Q. (2023). Joint Multimodal Entity-Relation Extraction Based on Edge-Enhanced Graph Alignment Network and Word-Pair Relation Tagging. Proceedings of the AAAI Conference on Artificial Intelligence, 37(9), 11051-11059. https://doi.org/10.1609/aaai.v37i9.26309

Section

AAAI Technical Track on Machine Learning IV