Recognizing Actions in Motion Trajectories Using Deep Neural Networks

Authors

  • Kunwar Singh, Georgia Institute of Technology
  • Nicholas Davis, Georgia Institute of Technology
  • Chih-Pin Hsiao, Georgia Institute of Technology
  • Mikhail Jacob, Georgia Institute of Technology
  • Krunal Patel, Georgia Institute of Technology
  • Brian Magerko, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aiide.v12i1.12881

Keywords:

convolutional neural network, pretend play, motion trajectory, deep learning

Abstract

This paper reports on the progress of a co-creative pretend play agent designed to interact with users by recognizing and responding to playful actions in a 2D virtual environment. In particular, we describe the design and evaluation of a classifier that recognizes 2D motion trajectories from the user’s actions. The performance of the classifier is evaluated using a publicly available dataset of labeled actions highly relevant to the domain of pretend play. We show that deep convolutional neural networks perform significantly better in recognizing these actions than previously employed methods. We also describe the plan for implementing a virtual play environment using the classifier in which the users and agent can collaboratively construct narratives during improvisational pretend play.
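To make the classification setup concrete, the sketch below shows one common way a convolutional network can consume a 2D motion trajectory: the trajectory is a sequence of (x, y) points, 1-D filters slide over the time axis to detect local motion patterns, and pooling plus a linear layer produce per-action scores. All shapes, kernel sizes, and names here are illustrative assumptions for exposition, not the architecture actually evaluated in the paper.

```python
import numpy as np

def conv1d_relu(seq, filters):
    """Valid 1-D convolution over the time axis, followed by ReLU.

    seq:     (T, C_in)  trajectory, e.g. C_in = 2 for (x, y) coordinates
    filters: (K, C_in, C_out)
    returns: (T - K + 1, C_out)
    """
    T, _ = seq.shape
    K, _, c_out = filters.shape
    out = np.zeros((T - K + 1, c_out))
    for t in range(T - K + 1):
        window = seq[t:t + K]  # (K, C_in) slice of the trajectory
        out[t] = np.tensordot(window, filters, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)

def classify(traj, filters, W, b):
    feats = conv1d_relu(traj, filters)  # local motion features over time
    pooled = feats.mean(axis=0)         # global average pooling -> fixed-size vector
    return pooled @ W + b               # linear layer -> one score per action class

# Illustrative run with random weights (a real model would be trained).
rng = np.random.default_rng(0)
traj = rng.standard_normal((50, 2))        # 50 time steps of (x, y)
filters = rng.standard_normal((5, 2, 8))   # kernel size 5, 8 filters
W, b = rng.standard_normal((8, 4)), np.zeros(4)
scores = classify(traj, filters, W, b)
print(scores.shape)                        # one score per hypothetical action class
```

Because the convolution and pooling operate along the time axis, trajectories of different lengths map to the same fixed-size feature vector, which is part of what makes this family of models a natural fit for variable-length action data.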

Published

2021-06-25

How to Cite

Singh, K., Davis, N., Hsiao, C.-P., Jacob, M., Patel, K., & Magerko, B. (2021). Recognizing Actions in Motion Trajectories Using Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 12(1), 211-217. https://doi.org/10.1609/aiide.v12i1.12881