VASR: Visual Analogies of Situation Recognition

Authors

  • Yonatan Bitton, The Hebrew University of Jerusalem
  • Ron Yosef, The Hebrew University of Jerusalem
  • Eliyahu Strugo, The Hebrew University of Jerusalem
  • Dafna Shahaf, The Hebrew University of Jerusalem
  • Roy Schwartz, The Hebrew University of Jerusalem
  • Gabriel Stanovsky, The Hebrew University of Jerusalem

DOI:

https://doi.org/10.1609/aaai.v37i1.25096

Keywords:

CV: Scene Analysis & Understanding, CV: Language and Vision

Abstract

A core process in human cognition is analogical mapping: the ability to identify a similar relational structure between different situations. We introduce a novel task, Visual Analogies of Situation Recognition, adapting the classical word-analogy task into the visual domain. Given a triplet of images, the task is to select an image candidate B' that completes the analogy (A to A' is like B to what?). Unlike previous work on visual analogy that focused on simple image transformations, we tackle complex analogies requiring understanding of scenes. We leverage situation recognition annotations and the CLIP model to generate a large set of 500k candidate analogies. Crowdsourced annotations for a sample of the data indicate that humans agree with the dataset label ~80% of the time (chance level 25%). Furthermore, we use human annotations to create a gold-standard dataset of 3,820 validated analogies. Our experiments demonstrate that state-of-the-art models do well when distractors are chosen randomly (~86%), but struggle with carefully chosen distractors (~53%, compared to 90% human accuracy). We hope our dataset will encourage the development of new analogy-making models. Website: https://vasr-dataset.github.io/
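The analogy selection described above (A : A' :: B : ?) can be illustrated with the classic vector-arithmetic heuristic over image embeddings: pick the candidate closest to A' − A + B. This is only a hypothetical sketch with toy vectors, not the paper's method; VASR builds on CLIP embeddings and situation-recognition annotations, and its evaluated models are learned classifiers.

```python
import numpy as np

def solve_analogy(a, a_prime, b, candidates):
    """Pick the candidate whose embedding is closest (cosine similarity)
    to a' - a + b. A toy baseline, not the dataset's actual pipeline."""
    target = a_prime - a + b
    target = target / np.linalg.norm(target)
    cands = np.stack(candidates)
    cands = cands / np.linalg.norm(cands, axis=1, keepdims=True)
    scores = cands @ target          # cosine similarity per candidate
    return int(np.argmax(scores)), scores

# Toy 3-d "embeddings": the A -> A' transformation adds the vector (0, 1, 0).
a       = np.array([1.0, 0.0, 0.0])
a_prime = np.array([1.0, 1.0, 0.0])
b       = np.array([0.0, 0.0, 1.0])
candidates = [
    np.array([0.0, 1.0, 1.0]),   # applies the same transformation to B
    np.array([1.0, 0.0, 1.0]),   # distractor
    np.array([0.0, 0.0, 2.0]),   # distractor
    np.array([1.0, 1.0, 1.0]),   # hard distractor
]
best, scores = solve_analogy(a, a_prime, b, candidates)
```

With these toy vectors the heuristic selects candidate 0, the image that underwent the same transformation as A to A'. The paper's carefully chosen distractors are designed so that such shallow similarity shortcuts fail.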

Published

2023-06-26

How to Cite

Bitton, Y., Yosef, R., Strugo, E., Shahaf, D., Schwartz, R., & Stanovsky, G. (2023). VASR: Visual Analogies of Situation Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 37(1), 241-249. https://doi.org/10.1609/aaai.v37i1.25096

Section

AAAI Technical Track on Computer Vision I