Improving Zero-Shot Phrase Grounding via Reasoning on External Knowledge and Spatial Relations

Authors

  • Zhan Shi, Queen's University
  • Yilin Shen, Samsung Research America
  • Hongxia Jin, Samsung Research America
  • Xiaodan Zhu, ECE, Queen's University

DOI:

https://doi.org/10.1609/aaai.v36i2.20123

Keywords:

Computer Vision (CV)

Abstract

Phrase grounding is a multi-modal task that localizes in an image the object referred to by a noun phrase in a text query. In the challenging zero-shot phrase grounding setting, existing state-of-the-art grounding models have limited capacity to handle unseen phrases. Humans, however, can ground novel types of objects in images with little effort, benefiting significantly from commonsense reasoning. In this paper, we design a novel phrase grounding architecture that builds multi-modal knowledge graphs using external knowledge and then performs graph reasoning and spatial relation reasoning to localize the referred noun phrases. We perform extensive experiments on different zero-shot grounding splits sub-sampled from the Flickr30K Entities and Visual Genome datasets, demonstrating that the proposed framework is orthogonal to backbone image encoders and outperforms the baselines by 2-3% in accuracy, a significant improvement under the standard evaluation metrics.
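To make the pipeline described in the abstract concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes hypothetical inputs (per-region visual features, external-knowledge concept embeddings linked to regions, region bounding boxes, and an encoded noun phrase) and shows one round of graph message passing over a multi-modal graph followed by a toy spatial-relation re-weighting before scoring regions against the phrase.

```python
# Illustrative sketch only -- not the paper's model. All inputs below are
# random stand-ins for features the real system would compute.
import numpy as np

rng = np.random.default_rng(0)
num_regions, num_kb, dim = 5, 3, 16

# Node features: detected image regions plus external-knowledge concept nodes.
region_feats = rng.normal(size=(num_regions, dim))
kb_feats = rng.normal(size=(num_kb, dim))
nodes = np.concatenate([region_feats, kb_feats], axis=0)

# Multi-modal graph: regions connect to the knowledge concepts they evoke
# (a random 0/1 adjacency stands in for the constructed edges).
adj = np.zeros((num_regions + num_kb, num_regions + num_kb))
adj[:num_regions, num_regions:] = rng.integers(0, 2, size=(num_regions, num_kb))
adj = np.maximum(adj, adj.T)
adj += np.eye(len(adj))  # self-loops

# One step of graph reasoning: normalized neighborhood aggregation (GCN-style).
deg_inv = 1.0 / adj.sum(axis=1, keepdims=True)
W = rng.normal(scale=0.1, size=(dim, dim))
hidden = np.maximum(deg_inv * (adj @ nodes) @ W, 0.0)  # ReLU

# Spatial relation reasoning (toy version): boxes as (x1, y1, x2, y2);
# pairwise center distances give a simple geometric bias over regions.
boxes = rng.uniform(0, 1, size=(num_regions, 4))
centers = (boxes[:, :2] + boxes[:, 2:]) / 2.0
pair_dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
spatial_bias = -pair_dist.mean(axis=1)  # toy heuristic favoring central regions

# Score each region against the phrase embedding and pick the best box.
phrase = rng.normal(size=(dim,))  # stand-in for the encoded noun phrase
scores = hidden[:num_regions] @ phrase + spatial_bias
print("grounded region index:", int(scores.argmax()))
```

The sketch only conveys the shape of the computation; the paper's actual graph construction, reasoning modules, and spatial features differ and are described in the full text.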

Published

2022-06-28

How to Cite

Shi, Z., Shen, Y., Jin, H., & Zhu, X. (2022). Improving Zero-Shot Phrase Grounding via Reasoning on External Knowledge and Spatial Relations. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2253-2261. https://doi.org/10.1609/aaai.v36i2.20123

Section

AAAI Technical Track on Computer Vision II