Research of Event Reconstruct Based on Multi-View Contrastive Learning (Student Abstract)

Authors

  • Yuefeng Ma Qufu Normal University
  • Zhongchao He Qufu Normal University
  • Shumei Wang Qufu Normal University

DOI:

https://doi.org/10.1609/aaai.v38i21.30478

Keywords:

Representation Learning, Contrastive Learning, Multi-view

Abstract

The proliferation of social media exacerbates information fragmentation, posing challenges to understanding public events. We address the problem of event reconstruction with a novel Multi-view Contrast Event Reconstruction (MCER) model. MCER uses contrastive learning to maximize feature similarity between different views of the same event while minimizing mutual information between distinct events, aggregating fragmented views into comprehensive event representations. MCER employs momentum and weight-sharing encoders in a three-tower architecture with a supervised contrastive loss for multi-view representation learning. Due to the scarcity of multi-view public datasets, we construct a new Mul-view-data benchmark. Experiments demonstrate MCER’s superior performance on public data and our Mul-view-data, significantly outperforming self-supervised methods by incorporating supervised contrastive techniques. MCER advances multi-view representation learning to counter information fragmentation and enable robust event understanding.
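The abstract states that MCER trains its three-tower encoders with a supervised contrastive loss, pulling views of the same event together and pushing distinct events apart. A minimal sketch of the standard supervised contrastive (SupCon) objective that such a model could use, with event IDs as labels — this is an illustrative formulation, not the authors' exact loss or architecture:

```python
import numpy as np

def supcon_loss(z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of view embeddings.

    z      : (n, d) array of view embeddings (one row per view).
    labels : (n,) array of event IDs; views with the same ID are positives.
    tau    : temperature scaling the similarities.
    """
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize embeddings
    sim = z @ z.T / tau                                # pairwise cosine similarities
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)                  # exclude self-comparisons

    # Numerically stable log-softmax over all other samples for each anchor.
    row_max = sim[not_self].reshape(n, n - 1).max(axis=1, keepdims=True)
    logits = sim - row_max
    exp = np.exp(logits) * not_self
    log_prob = logits - np.log(exp.sum(axis=1, keepdims=True))

    # Positives: other views of the same event (same label, not the anchor itself).
    pos = (labels[:, None] == labels[None, :]) & not_self
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()
```

Under this objective, a batch whose same-event views are close in embedding space yields a lower loss than one whose positives are scattered, which is the behavior the abstract describes for aggregating fragmented views.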

Published

2024-03-24

How to Cite

Ma, Y., He, Z., & Wang, S. (2024). Research of Event Reconstruct Based on Multi-View Contrastive Learning (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23571-23572. https://doi.org/10.1609/aaai.v38i21.30478