Human-Like Sketch Object Recognition via Analogical Learning

Authors

  • Kezhen Chen, Northwestern University
  • Irina Rabkina, Northwestern University
  • Matthew D. McLure, Northwestern University
  • Kenneth D. Forbus, Northwestern University

DOI:

https://doi.org/10.1609/aaai.v33i01.33011336

Abstract

Deep learning systems can perform well on some image recognition tasks. However, they have serious limitations, including requiring far more training data than humans do and being fooled by adversarial examples. By contrast, analogical learning over relational representations tends to be far more data-efficient, requiring only human-like amounts of training data. This paper introduces an approach that combines automatically constructed qualitative visual representations with analogical learning to tackle a hard computer vision problem, object recognition from sketches. Results from the MNIST dataset and a novel dataset, the Coloring Book Objects dataset, are provided. Comparison to existing approaches indicates that analogical generalization can be used to identify sketched objects from these datasets with several orders of magnitude fewer examples than deep learning systems require.
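As a loose illustration of the idea (not the paper's actual CogSketch/SAGE pipeline), analogical classification over relational representations can be sketched as matching a query sketch's qualitative facts against a handful of stored, labeled examples. All relation names and sketches below are invented for illustration.

```python
# Toy illustration only: sketches are encoded as sets of qualitative
# relational facts, and a new sketch is labeled by the stored example
# with the greatest relational overlap. The real system uses structure
# mapping over richer representations; this is a simplified stand-in.

def overlap(a, b):
    """Jaccard similarity between two sets of relational facts."""
    return len(a & b) / len(a | b)

def classify(sketch, labeled_examples):
    """Return the label of the example with the highest relational overlap."""
    return max(labeled_examples, key=lambda ex: overlap(sketch, ex[0]))[1]

# Hypothetical qualitative facts for three training sketches.
examples = [
    ({("above", "triangle", "square"), ("contains", "square", "dot")}, "house"),
    ({("left-of", "circle", "circle"), ("above", "triangle", "circle")}, "cat"),
    ({("above", "triangle", "square"), ("left-of", "square", "square")}, "train"),
]

query = {("above", "triangle", "square"), ("contains", "square", "dot"),
         ("left-of", "dot", "dot")}
print(classify(query, examples))  # -> house
```

Even this crude overlap measure classifies from a few examples per concept, which is the data-efficiency contrast with deep learning that the abstract highlights.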

Published

2019-07-17

How to Cite

Chen, K., Rabkina, I., McLure, M. D., & Forbus, K. D. (2019). Human-Like Sketch Object Recognition via Analogical Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 1336-1343. https://doi.org/10.1609/aaai.v33i01.33011336

Section

AAAI Technical Track: Cognitive Systems