Latent Dirichlet Allocation for Unsupervised Activity Analysis on an Autonomous Mobile Robot

Authors

  • Paul Duckworth, University of Leeds
  • Muhannad Alomari, University of Leeds
  • James Charles, University of Leeds
  • David Hogg, University of Leeds
  • Anthony Cohn, University of Leeds

DOI:

https://doi.org/10.1609/aaai.v31i1.11043

Keywords:

Unsupervised Learning, Qualitative Spatio-Temporal Representations, Mobile Robotics, Plan and Activity Recognition, Latent Dirichlet Allocation

Abstract

For autonomous robots to collaborate on joint tasks with humans, they require a shared understanding of an observed scene. We present a method for unsupervised learning of common human movements and activities on an autonomous mobile robot, which generalises and improves on recent results. Our framework encodes multiple qualitative abstractions of RGBD video from human observations and does not require external temporal segmentation. Analogously to information retrieval in text corpora, each human detection is modelled as a random mixture of latent topics. A generative probabilistic technique is used to recover topic distributions over an auto-generated vocabulary of discrete, qualitative spatio-temporal code words. We show that the emergent categories align well with human activities as interpreted by a human. This is a particularly challenging task on a mobile robot due to the varying camera viewpoints, which lead to incomplete, partial and occluded human detections.
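
To make the text-corpus analogy concrete, the sketch below (not the authors' implementation) treats each human detection as a "document", i.e. a bag of discrete qualitative spatio-temporal code words, and fits a standard LDA model to recover per-detection topic mixtures. The vocabulary, the count vectors, and the use of scikit-learn are illustrative assumptions, not details taken from the paper.

```python
# Minimal illustrative sketch: LDA over a vocabulary of qualitative
# spatio-temporal code words. Vocabulary entries and counts below are
# hypothetical placeholders, not the paper's auto-generated vocabulary.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical code words, e.g. qualitative relations between tracked
# body joints and scene objects.
vocabulary = ["hand_near_head", "hand_moving_away_torso",
              "torso_static", "legs_moving", "hand_near_surface"]

# Each row is one human detection: a bag-of-words count vector over the
# vocabulary (toy counts for illustration only).
detections = np.array([
    [4, 1, 3, 0, 0],
    [0, 0, 1, 5, 0],
    [1, 0, 4, 0, 5],
    [3, 2, 2, 0, 1],
])

# Fit a generative topic model: each detection becomes a random mixture
# over latent topics, and each topic a distribution over code words.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(detections)  # per-detection topic mixtures; rows sum to 1

print(theta)
# Normalise the topic-word pseudocounts to inspect each topic's
# distribution over the code-word vocabulary.
print(lda.components_ / lda.components_.sum(axis=1, keepdims=True))
```

In the paper's setting, the emergent topics would then be compared against human-interpreted activity categories; here the two recovered topics simply group detections with similar code-word profiles.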

Published

2017-02-12

How to Cite

Duckworth, P., Alomari, M., Charles, J., Hogg, D., & Cohn, A. (2017). Latent Dirichlet Allocation for Unsupervised Activity Analysis on an Autonomous Mobile Robot. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11043