Beyond Grounding: Extracting Fine-Grained Event Hierarchies across Modalities

Authors

  • Hammad Ayyubi, Columbia University
  • Christopher Thomas, Virginia Tech
  • Lovish Chum, Columbia University
  • Rahul Lokesh, Samsung Research America
  • Long Chen, The Hong Kong University of Science and Technology
  • Yulei Niu, Columbia University
  • Xudong Lin, Columbia University
  • Xuande Feng, Columbia University
  • Jaywon Koo, Columbia University
  • Sounak Ray, Columbia University
  • Shih-Fu Chang, Columbia University

DOI:

https://doi.org/10.1609/aaai.v38i16.29718

Keywords:

NLP: Language Grounding & Multi-modal NLP, CV: Language and Vision, CV: Video Understanding & Activity Analysis

Abstract

Events describe happenings in our world that are of importance. Naturally, understanding events mentioned in multimedia content, and how they are related, is an important way of comprehending our world. Existing literature can infer whether events across textual and visual (video) domains are identical (via grounding) and thus lie on the same semantic level. However, grounding fails to capture the intricate cross-event relations that arise because the same events are referred to at many semantic levels. For example, the abstract event of "war" manifests at a lower semantic level through the subevents "tanks firing" (in video) and an airplane being "shot" (in text), leading to a hierarchical, multimodal relationship between the events. In this paper, we propose the task of extracting event hierarchies from multimodal (video and text) data to capture how the same event manifests itself in different modalities at different semantic levels. This reveals the structure of events and is critical to understanding them. To support research on this task, we introduce the Multimodal Hierarchical Events (MultiHiEve) dataset. Unlike prior video-language datasets, MultiHiEve is composed of news video-article pairs, which makes it rich in event hierarchies. We densely annotate a portion of the dataset to construct the test benchmark. We show the limitations of state-of-the-art unimodal and multimodal baselines on this task. Further, we address these limitations via a new weakly supervised model that leverages only unannotated video-article pairs from MultiHiEve. A thorough evaluation of the proposed method demonstrates improved performance on this task and highlights opportunities for future research. Data: https://github.com/hayyubi/multihieve
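To make the abstract's notion of a cross-modal event hierarchy concrete, the following is a minimal Python sketch of how such a hierarchy might be represented, using the "war" example above. The class and field names are hypothetical illustrations and are not taken from the MultiHiEve release or the authors' code.

```python
# Illustrative only: hypothetical representation of a cross-modal event hierarchy.
from dataclasses import dataclass, field
from typing import Iterator, List, Literal, Tuple


@dataclass
class Event:
    description: str                      # e.g. "war", "tanks firing"
    modality: Literal["text", "video"]    # modality in which the event is mentioned
    children: List["Event"] = field(default_factory=list)  # subevents at a lower semantic level


# The abstract's example: the abstract event "war" manifests through
# subevents observed in the video and in the article text.
war = Event("war", "text", children=[
    Event("tanks firing", "video"),
    Event("airplane shot", "text"),
])


def parent_child_pairs(event: Event) -> Iterator[Tuple[str, str]]:
    """Walk the hierarchy and yield (parent event, subevent) pairs."""
    for child in event.children:
        yield (event.description, child.description)
        yield from parent_child_pairs(child)


print(list(parent_child_pairs(war)))
# [('war', 'tanks firing'), ('war', 'airplane shot')]
```

In this framing, grounding alone would only tell us whether two mentions refer to the same event; the hierarchy additionally records which mentions are subevents of a more abstract event, across modalities.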

Published

2024-03-24

How to Cite

Ayyubi, H., Thomas, C., Chum, L., Lokesh, R., Chen, L., Niu, Y., Lin, X., Feng, X., Koo, J., Ray, S., & Chang, S.-F. (2024). Beyond Grounding: Extracting Fine-Grained Event Hierarchies across Modalities. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17664-17672. https://doi.org/10.1609/aaai.v38i16.29718

Issue

Vol. 38 No. 16 (2024)

Section

AAAI Technical Track on Natural Language Processing I