Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization

Authors

  • Litian Zhang, Beihang University
  • Xiaoming Zhang, Beihang University
  • Junshu Pan, Beihang University

DOI:

https://doi.org/10.1609/aaai.v36i10.21422

Keywords:

Speech & Natural Language Processing (SNLP)

Abstract

Multimodal summarization with multimodal output (MSMO) generates a summary with both textual and visual content. A multimodal news report contains heterogeneous content, which makes MSMO nontrivial. Moreover, the different modalities of data in a news report are observed to correlate hierarchically. Traditional MSMO methods handle the different modalities indistinguishably by learning a single representation for the whole data, which does not directly adapt to the heterogeneous content and hierarchical correlation. In this paper, we propose a hierarchical cross-modality semantic correlation learning model (HCSCL) to learn the intra- and inter-modal correlations in multimodal data. HCSCL adopts a graph network to encode the intra-modal correlation. Then, a hierarchical fusion framework is proposed to learn the hierarchical correlation between text and images. Furthermore, we construct a new dataset with relevant image annotations and image object labels to supervise the learning procedure. Extensive experiments on the dataset show that HCSCL significantly outperforms the baseline methods in automatic summarization metrics and fine-grained diversity tests.
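To make the architecture described in the abstract concrete, the sketch below shows one plausible reading of the idea: intra-modal encoders propagate information within each modality, and two stacked cross-modal fusion layers model the fine-grained (object/word) and coarse-grained (image/sentence) levels. This is not the authors' implementation; all module names, the use of self-attention in place of the paper's graph network, and the two-level stacking are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code) of hierarchical cross-modal fusion.
# Assumes text units (e.g., sentences) and visual units (e.g., detected objects)
# are already encoded into fixed-size feature vectors.
import torch
import torch.nn as nn

class IntraModalEncoder(nn.Module):
    """Self-attention layer standing in for the intra-modal graph encoder."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        out, _ = self.attn(x, x, x)               # propagate within one modality
        return self.norm(x + out)

class CrossModalFusion(nn.Module):
    """Cross-attention from text units to visual units (one hierarchy level)."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text, visual):
        out, _ = self.attn(text, visual, visual)  # text queries attend to images
        return self.norm(text + out)

class HierarchicalFusion(nn.Module):
    """Intra-modal encoding followed by two stacked cross-modal fusion levels."""
    def __init__(self, dim):
        super().__init__()
        self.text_enc = IntraModalEncoder(dim)
        self.visual_enc = IntraModalEncoder(dim)
        self.low_fusion = CrossModalFusion(dim)   # fine-grained (object/word) level
        self.high_fusion = CrossModalFusion(dim)  # coarse-grained (image/sentence) level

    def forward(self, text_feats, visual_feats):
        t = self.text_enc(text_feats)             # intra-modal correlation (text)
        v = self.visual_enc(visual_feats)         # intra-modal correlation (image)
        fused = self.low_fusion(t, v)             # inter-modal, low level
        return self.high_fusion(fused, v)         # inter-modal, high level

# Usage with random features: batch of 2, 8 text units, 6 visual units, dim 256.
model = HierarchicalFusion(256)
text = torch.randn(2, 8, 256)
visual = torch.randn(2, 6, 256)
print(model(text, visual).shape)  # torch.Size([2, 8, 256])
```

The fused text-side representations would then feed a summary decoder, with the cross-attention weights usable for selecting relevant images; again, that pipeline is an assumption about how such components are typically combined, not a description of HCSCL itself.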

Published

2022-06-28

How to Cite

Zhang, L., Zhang, X., & Pan, J. (2022). Hierarchical Cross-Modality Semantic Correlation Learning Model for Multimodal Summarization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(10), 11676-11684. https://doi.org/10.1609/aaai.v36i10.21422

Section

AAAI Technical Track on Speech and Natural Language Processing