Deconfounded Visual Grounding

Authors

  • Jianqiang Huang — Nanyang Technological University; Damo Academy, Alibaba Group
  • Yu Qin — Damo Academy, Alibaba Group
  • Jiaxin Qi — Nanyang Technological University
  • Qianru Sun — Singapore Management University
  • Hanwang Zhang — Nanyang Technological University

DOI:

https://doi.org/10.1609/aaai.v36i1.19983

Keywords:

Computer Vision (CV)

Abstract

We focus on the confounding bias between language and location in the visual grounding pipeline, where we find that this bias is the major bottleneck for visual reasoning. For example, the grounding process is often a trivial language-location association without visual reasoning, e.g., grounding any language query containing "sheep" to nearly central regions, because most queries about sheep have ground-truth locations at the image center. First, we frame the visual grounding pipeline as a causal graph, which shows the causalities among the image, the query, the target location, and an underlying confounder. Through the causal graph, we know how to break the grounding bottleneck: deconfounded visual grounding. Second, to tackle the challenge that the confounder is unobserved in general, we propose a confounder-agnostic approach, called Referring Expression Deconfounder (RED), to remove the confounding bias. Third, we implement RED as a simple language attention, which can be applied in any grounding method. On popular benchmarks, RED improves various state-of-the-art grounding methods by a significant margin. Code is available at: https://github.com/JianqiangH/Deconfounded_VG.
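The abstract notes that RED is implemented as a simple language attention that can be plugged into any grounding method. Below is a minimal, illustrative sketch (not the authors' code; see the linked repository for the actual implementation) of what such a language-attention module could look like: it reweights the query-token features before they are fused with visual features. All module names, shapes, and hyperparameters here are assumptions.

```python
# Hypothetical sketch of a language-attention module in the spirit of RED.
# Details (dimensions, scoring function) are illustrative assumptions.
import torch
import torch.nn as nn

class LanguageAttention(nn.Module):
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar attention score per token

    def forward(self, query_feats: torch.Tensor) -> torch.Tensor:
        # query_feats: (batch, num_tokens, dim) token embeddings of the expression
        weights = torch.softmax(self.score(query_feats), dim=1)  # (batch, num_tokens, 1)
        # Weighted sum gives a reweighted query representation that a grounding
        # head could consume in place of a raw pooled embedding.
        return (weights * query_feats).sum(dim=1)  # (batch, dim)

# Usage with dummy token features:
feats = torch.randn(2, 12, 256)
attended = LanguageAttention(256)(feats)  # shape: (2, 256)
```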

Published

2022-06-28

How to Cite

Huang, J., Qin, Y., Qi, J., Sun, Q., & Zhang, H. (2022). Deconfounded Visual Grounding. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 998-1006. https://doi.org/10.1609/aaai.v36i1.19983

Section

AAAI Technical Track on Computer Vision I