Action Recognition From Skeleton Data via Analogical Generalization Over Qualitative Representations

Authors

  • Kezhen Chen, Northwestern University
  • Kenneth Forbus, Northwestern University

DOI:

https://doi.org/10.1609/aaai.v32i1.11328

Keywords:

Cognitive system, Qualitative reasoning, Action recognition

Abstract

Human action recognition remains a difficult problem for AI. Traditional machine learning techniques can achieve high recognition accuracy, but they are typically black boxes whose internal models are not inspectable and whose results are not explainable. This paper describes a new pipeline for recognizing human actions from skeleton data via analogical generalization. Specifically, starting with Kinect data, we segment each human action into temporal regions where the motion is qualitatively uniform, creating a sketch graph that provides a qualitative representation of the behavior and is easy to visualize. Models are learned from sketch graphs via analogical generalization and are then used for classification via analogical retrieval. The retrieval process also produces links between the new example and components of the model, which provide explanations. To improve recognition accuracy, we use dynamic feature selection to pick informative relational features. We illustrate the explanatory advantage of our approach by example, and results on three public datasets demonstrate its utility.
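To make the segmentation step concrete, below is a minimal illustrative sketch of how one joint's trajectory could be split into temporal regions of qualitatively uniform motion, the regions the abstract describes as the basis for sketch graphs. This is an assumption for illustration, not the authors' implementation: the noise threshold EPS, the per-axis sign encoding, and the segment_trajectory helper are hypothetical, and the sketch-graph construction, analogical generalization, and retrieval stages are not reproduced here.

```python
# Minimal sketch (assumed, not the paper's code): segment a single joint's
# 3D trajectory into temporal regions where the qualitative motion
# direction is uniform along each axis.

from typing import List, Tuple

EPS = 0.01  # assumed noise threshold for treating a velocity as zero


def qualitative_state(prev: Tuple[float, float, float],
                      curr: Tuple[float, float, float]) -> Tuple[int, ...]:
    """Sign of motion (-1, 0, +1) along each axis between two frames."""
    def sign(d: float) -> int:
        return 0 if abs(d) < EPS else (1 if d > 0 else -1)
    return tuple(sign(c - p) for p, c in zip(prev, curr))


def segment_trajectory(frames: List[Tuple[float, float, float]]
                       ) -> List[Tuple[int, int, Tuple[int, ...]]]:
    """Split a trajectory into (start_frame, end_frame, qualitative_state)
    segments, breaking wherever the qualitative motion direction changes."""
    if len(frames) < 2:
        return []
    segments = []
    start = 0
    state = qualitative_state(frames[0], frames[1])
    for i in range(1, len(frames) - 1):
        new_state = qualitative_state(frames[i], frames[i + 1])
        if new_state != state:
            segments.append((start, i, state))
            start, state = i, new_state
    segments.append((start, len(frames) - 1, state))
    return segments


if __name__ == "__main__":
    # Toy right-hand trajectory: rises in y, holds, then falls.
    hand = [(0.0, 0.0, 0.0), (0.0, 0.1, 0.0), (0.0, 0.2, 0.0),
            (0.0, 0.2, 0.0), (0.0, 0.1, 0.0), (0.0, 0.0, 0.0)]
    for start, end, state in segment_trajectory(hand):
        print(f"frames {start}-{end}: qualitative motion {state}")
```

Each segment would correspond to one qualitatively uniform region of the action; the paper's pipeline then encodes such regions and their relations as a sketch graph before applying analogical generalization and retrieval.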

Published

2018-04-25

How to Cite

Chen, K., & Forbus, K. (2018). Action Recognition From Skeleton Data via Analogical Generalization Over Qualitative Representations. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11328

Issue

Vol. 32 No. 1 (2018)

Section

AAAI Technical Track: Cognitive Systems