Rethinking the Two-Stage Framework for Grounded Situation Recognition

Authors

  • Meng Wei, National University of Singapore
  • Long Chen, Columbia University
  • Wei Ji, National University of Singapore
  • Xiaoyu Yue, Centre for Perceptual and Interactive Intelligence
  • Tat-Seng Chua, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v36i3.20167

Keywords:

Computer Vision (CV)

Abstract

Grounded Situation Recognition (GSR), i.e., recognizing the salient activity (or verb) category in an image (e.g., buying) and detecting all corresponding semantic roles (e.g., agent and goods), is an essential step towards “human-like” event understanding. Since each verb is associated with a specific set of semantic roles, all existing GSR methods resort to a two-stage framework: predicting the verb in the first stage and detecting the semantic roles in the second stage. However, both stages have obvious drawbacks: 1) the cross-entropy (XE) loss widely used for object recognition is insufficient for verb classification due to the large intra-class variation and high inter-class similarity among daily activities; 2) all semantic roles are detected autoregressively, which fails to model the complex semantic relations between different roles. To this end, we propose a novel SituFormer for GSR, which consists of a Coarse-to-Fine Verb Model (CFVM) and a Transformer-based Noun Model (TNM). CFVM is a two-step verb prediction model: a coarse-grained model trained with the XE loss first proposes a set of verb candidates, and a fine-grained model trained with a triplet loss then re-ranks these candidates using enhanced verb features (not only separable but also discriminative). TNM is a transformer-based semantic role detection model that detects all roles in parallel. Owing to the global relation-modeling ability and flexibility of the transformer decoder, TNM can fully exploit the statistical dependencies among roles. Extensive experiments on the challenging SWiG benchmark show that SituFormer achieves new state-of-the-art performance with significant gains under various metrics. Code is available at https://github.com/kellyiss/SituFormer.
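
To make the two components concrete, here is a minimal PyTorch sketch of the coarse-to-fine verb re-ranking idea. The class name, feature dimensions, and the cosine-similarity re-ranking rule are assumptions for illustration; the abstract only specifies that a coarse model trained with XE loss proposes verb candidates and a fine model trained with a triplet loss re-ranks them.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the fine-grained re-ranking step in CFVM.
# The coarse model (not shown) is an ordinary XE-trained classifier
# that supplies the top-k verb candidates.
class FineVerbReranker(nn.Module):
    def __init__(self, feat_dim=512, num_verbs=504):  # illustrative sizes
        super().__init__()
        # learnable verb embeddings, shared by training and re-ranking
        self.verb_embed = nn.Embedding(num_verbs, feat_dim)
        self.triplet = nn.TripletMarginLoss(margin=0.2)

    def forward(self, img_feat, gt_verb, neg_verb):
        # img_feat: (B, D) image features serving as triplet anchors;
        # gt_verb / neg_verb: (B,) positive and negative verb ids.
        anchor = F.normalize(img_feat, dim=-1)
        pos = F.normalize(self.verb_embed(gt_verb), dim=-1)
        neg = F.normalize(self.verb_embed(neg_verb), dim=-1)
        # pulls the image towards its verb and away from confusable verbs,
        # making the features discriminative rather than merely separable
        return self.triplet(anchor, pos, neg)

    @torch.no_grad()
    def rerank(self, img_feat, candidates):
        # candidates: (B, K) top-k verb ids proposed by the coarse model
        anchor = F.normalize(img_feat, dim=-1).unsqueeze(1)      # (B, 1, D)
        cand = F.normalize(self.verb_embed(candidates), dim=-1)  # (B, K, D)
        sim = (anchor * cand).sum(-1)                            # cosine sim
        order = sim.argsort(dim=-1, descending=True)
        return candidates.gather(1, order)                       # re-ranked
```

Similarly, a sketch of the parallel role decoding in TNM, again with assumed names and dimensions: each semantic role of the predicted verb's frame becomes one query to a transformer decoder, so all roles attend to the image and to each other in a single pass instead of being emitted autoregressively.

```python
import torch.nn as nn

# Hypothetical sketch of a transformer-based noun model: one decoder
# query per role, decoded in parallel over the image features.
class ParallelRoleDecoder(nn.Module):
    def __init__(self, d_model=256, num_roles=190, num_nouns=10000):
        super().__init__()
        self.role_embed = nn.Embedding(num_roles, d_model)  # one query per role type
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=6)
        self.noun_head = nn.Linear(d_model, num_nouns)  # noun class per role
        self.box_head = nn.Linear(d_model, 4)           # grounding box per role

    def forward(self, img_tokens, role_ids):
        # img_tokens: (B, N, D) flattened image features from an encoder;
        # role_ids:   (B, R) role indices of the predicted verb's frame
        #             (padded, since frames have different numbers of roles).
        queries = self.role_embed(role_ids)              # (B, R, D)
        hs = self.decoder(queries, img_tokens)           # all roles in one pass
        return self.noun_head(hs), self.box_head(hs).sigmoid()
```

Because every role query sees every other role query through decoder self-attention, the dependencies among roles (e.g., an agent who buys implies goods nearby) can be modeled globally rather than only left-to-right.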

Published

2022-06-28

How to Cite

Wei, M., Chen, L., Ji, W., Yue, X., & Chua, T.-S. (2022). Rethinking the Two-Stage Framework for Grounded Situation Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2651-2658. https://doi.org/10.1609/aaai.v36i3.20167

Section

AAAI Technical Track on Computer Vision III