Exploiting the Social-Like Prior in Transformer for Visual Reasoning

Authors

  • Yudong Han, School of Software, Shandong University
  • Yupeng Hu, School of Software, Shandong University
  • Xuemeng Song, School of Computer Science and Technology, Shandong University
  • Haoyu Tang, School of Software, Shandong University
  • Mingzhu Xu, School of Software, Shandong University
  • Liqiang Nie, School of Computer Science and Technology, Harbin Institute of Technology (Shenzhen)

DOI:

https://doi.org/10.1609/aaai.v38i3.27977

Keywords:

CV: Language and Vision, CV: Visual Reasoning & Symbolic Representations

Abstract

Benefiting from the instrumental global dependency modeling of self-attention (SA), transformer-based approaches have become the pivotal choice for numerous downstream visual reasoning tasks, such as visual question answering (VQA) and referring expression comprehension (REC). However, recent studies have suggested that SA tends to suffer from rank collapse, which inevitably leads to representation degradation as the transformer layers go deeper. Inspired by social network theory, we draw an analogy between social behavior and regional information interaction in SA, and harness two crucial notions from social networks, structural hole and degree centrality, to explore possible optimizations of SA learning, which naturally yields two plug-and-play social-like modules. Based on structural holes, the former module makes information interaction in SA more structured, effectively avoiding redundant information aggregation and global feature homogenization for better rank remedy; the latter module then comprehensively characterizes and refines representation discrimination by considering the degree centrality of regions and the transitivity of relations. Without bells and whistles, our model equipped with this social-like prior outperforms a number of baselines by a noticeable margin on five benchmarks across VQA and REC tasks, and a series of explanatory results are showcased to sufficiently reveal the social-like behaviors in SA.
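To make the two notions concrete, below is a minimal, hypothetical sketch of how structural-hole-style sparsification and degree-centrality reweighting could be attached to standard self-attention. This is not the authors' released implementation; the function name, `keep_ratio`, and `centrality_weight` are illustrative assumptions chosen for clarity.

```python
import torch

def social_like_attention(q, k, v, keep_ratio=0.5, centrality_weight=0.1):
    """Illustrative sketch (not the authors' code): prune the attention graph
    to a structured subset of edges, then bias aggregation toward
    high-degree-centrality regions."""
    d = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d ** 0.5           # (B, N, N) affinity graph
    attn = scores.softmax(dim=-1)

    # Structural-hole-style pruning: each query keeps only its strongest edges,
    # so regions interact with a structured subset rather than everything,
    # limiting redundant aggregation and feature homogenization.
    k_keep = max(1, int(attn.size(-1) * keep_ratio))
    threshold = attn.topk(k_keep, dim=-1).values[..., -1:]  # per-row cutoff
    attn = attn * (attn >= threshold)
    attn = attn / attn.sum(dim=-1, keepdim=True).clamp_min(1e-6)  # renormalize rows

    # Degree centrality: total attention each key region receives (in-degree).
    centrality = attn.sum(dim=-2, keepdim=True)           # (B, 1, N)
    centrality = centrality / centrality.sum(dim=-1, keepdim=True)

    # Blend in centrality so discriminative, well-connected regions are emphasized.
    attn = (1 - centrality_weight) * attn + centrality_weight * centrality
    return attn @ v

# Toy usage on random features
q = k = v = torch.randn(2, 16, 64)
out = social_like_attention(q, k, v)
print(out.shape)  # torch.Size([2, 16, 64])
```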

Published

2024-03-24

How to Cite

Han, Y., Hu, Y., Song, X., Tang, H., Xu, M., & Nie, L. (2024). Exploiting the Social-Like Prior in Transformer for Visual Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(3), 2058-2066. https://doi.org/10.1609/aaai.v38i3.27977

Section

AAAI Technical Track on Computer Vision II