Visual Relation Detection using Hybrid Analogical Learning

Authors

  • Kezhen Chen, Northwestern University, Evanston, IL
  • Ken Forbus, Northwestern University, Evanston, IL

Keywords

Analogy

Abstract

Visual Relation Detection is currently one of the most popular problems in visual understanding. Many deep-learning models have been designed for relation detection on images and have achieved impressive results. However, deep-learning models have several serious problems, including poor training efficiency and a lack of interpretability. Psychologists have ample evidence that analogy is central to human learning and reasoning, including visual reasoning. This paper introduces a new hybrid system for visual relation detection that combines deep-learning models with analogical generalization. Object bounding boxes and masks are detected using deep-learning models, and analogical generalization over qualitative representations is used to detect visual relations between object pairs. Experiments on the Visual Relation Detection dataset indicate that our hybrid system achieves comparable results on the task and is more training-efficient and explainable than pure deep-learning models.
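To make the abstract's pipeline concrete, here is a minimal, hypothetical sketch of analogical generalization over qualitative representations. It is not the authors' implementation: the predicate names, the Jaccard-overlap similarity (a crude stand-in for structure mapping), and the intersection-based merging are all illustrative assumptions, loosely inspired by SAGE-style generalization pools.

```python
class AnalogicalGeneralizer:
    """Toy analogical generalizer: cases that share a relation label are
    merged into a generalization (here, the intersection of their
    qualitative facts). NOT the paper's system; an illustrative sketch."""

    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.generalizations = {}  # relation label -> set of shared facts

    @staticmethod
    def similarity(case_a, case_b):
        """Fraction of shared qualitative facts (Jaccard overlap); a crude
        stand-in for structure-mapping similarity."""
        if not case_a and not case_b:
            return 1.0
        return len(case_a & case_b) / len(case_a | case_b)

    def add_case(self, facts, label):
        facts = set(facts)
        if label not in self.generalizations:
            self.generalizations[label] = facts
        elif self.similarity(self.generalizations[label], facts) >= self.threshold:
            # Keep only the structure shared across cases of this relation.
            self.generalizations[label] &= facts
        # (A fuller system would retain non-assimilated cases as exemplars.)

    def classify(self, facts):
        """Return the relation whose generalization best matches the case."""
        facts = set(facts)
        return max(self.generalizations,
                   key=lambda lab: self.similarity(self.generalizations[lab], facts))
```

For example, training on qualitative descriptions of object pairs such as `{"above", "contact"}` labeled "on" would let the generalizer classify a new pair sharing those facts as "on" and explain the answer by pointing to the shared facts, which is the kind of explainability the abstract contrasts with pure deep-learning models.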

Published

2021-05-18

How to Cite

Chen, K., & Forbus, K. (2021). Visual Relation Detection using Hybrid Analogical Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(1), 801-808. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16162

Section

AAAI Technical Track on Cognitive Modeling and Cognitive Systems