A Case Study of the Shortcut Effects in Visual Commonsense Reasoning

Authors

  • Keren Ye University of Pittsburgh
  • Adriana Kovashka University of Pittsburgh

DOI:

https://doi.org/10.1609/aaai.v35i4.16428

Keywords:

Language and Vision

Abstract

Visual reasoning and question answering have garnered attention in recent years. Many datasets and evaluation protocols have been proposed; some have been shown to contain bias that allows models to "cheat" without performing true, generalizable reasoning. A well-known bias is dependence on language priors (frequency of answers), which results in the model not looking at the image. We discover a new type of bias in the Visual Commonsense Reasoning (VCR) dataset. In particular, we show that most state-of-the-art models exploit co-occurring text between the input (question) and output (answer options), and rely on only a few pieces of information in the candidate options to make a decision. Unfortunately, relying on such superficial evidence makes models very fragile. To measure fragility, we propose two ways to modify the validation data, in which a few words in the answer choices are changed without significant change in meaning. We find that such insignificant changes cause models' performance to degrade significantly. To resolve the issue, we propose a curriculum-based masking approach as a mechanism for more robust training. Our method improves the baseline by requiring it to pay attention to the answers as a whole, and is more effective than prior masking strategies.
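The abstract does not spell out how the curriculum-based masking is implemented. As an illustration only, the sketch below shows one plausible form of such a step: a schedule (assumed here to be linear in the training epoch) that masks an increasing fraction of answer-option tokens with a BERT-style `[MASK]` token, so the model cannot rely on a few co-occurring words alone. The function names, the `[MASK]` token, and the schedule are assumptions for this sketch, not the authors' procedure.

```python
import random

MASK_TOKEN = "[MASK]"  # assumed BERT-style mask token

def mask_rate(epoch: int, max_epochs: int, max_rate: float = 0.5) -> float:
    """Assumed curriculum schedule: mask more answer tokens as training progresses."""
    return max_rate * min(1.0, epoch / max(1, max_epochs - 1))

def mask_answer_tokens(answer_tokens, epoch, max_epochs, seed=None):
    """Randomly replace a schedule-determined fraction of answer tokens with [MASK],
    encouraging the model to attend to the answer as a whole rather than a few key words."""
    rng = random.Random(seed)
    rate = mask_rate(epoch, max_epochs)
    return [MASK_TOKEN if rng.random() < rate else tok for tok in answer_tokens]

# Example: later epochs hide more of each candidate answer.
answer = "She is pointing at the menu because she wants to order".split()
print(mask_answer_tokens(answer, epoch=0, max_epochs=10, seed=0))
print(mask_answer_tokens(answer, epoch=9, max_epochs=10, seed=0))
```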

Published

2021-05-18

How to Cite

Ye, K., & Kovashka, A. (2021). A Case Study of the Shortcut Effects in Visual Commonsense Reasoning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(4), 3181-3189. https://doi.org/10.1609/aaai.v35i4.16428

Section

AAAI Technical Track on Computer Vision III