Learning Latent Subevents in Activity Videos Using Temporal Attention Filters

Authors

  • A. Piergiovanni, Indiana University
  • Chenyou Fan, Indiana University
  • Michael Ryoo, Indiana University

DOI:

https://doi.org/10.1609/aaai.v31i1.11240

Abstract

In this paper, we introduce the concept of temporal attention filters and describe how they can be used for human activity recognition from videos. Many high-level activities are composed of multiple temporal parts (e.g., sub-events) with different durations/speeds, and our objective is to make the model explicitly learn such temporal structure using multiple attention filters and benefit from it. Our temporal filters are designed to be fully differentiable, allowing end-to-end training of the temporal filters together with the underlying frame-based or segment-based convolutional neural network architectures. This paper presents an approach for learning a set of optimal static temporal attention filters to be shared across different videos, and extends this approach to dynamically adjust attention filters per testing video using recurrent long short-term memory networks (LSTMs). This allows our temporal attention filters to learn latent sub-events specific to each activity. We experimentally confirm that the proposed concept of temporal attention filters benefits activity recognition, and we visualize the learned latent sub-events.
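The abstract's core idea, a differentiable temporal filter that pools per-frame features into a fixed number of attended sub-event representations, can be sketched as follows. This is a minimal illustration, assuming each filter is a bank of Gaussian kernels over the frame axis parameterized by a learnable center and width; the function and parameter names are illustrative, not taken from the paper's implementation.

```python
import numpy as np

def temporal_attention_filter(features, center, width, n_kernels=3):
    """Pool T per-frame features into n_kernels attended features.

    features : (T, D) array of frame-level CNN features.
    center   : relative center of the filter in [0, 1].
    width    : standard deviation of each Gaussian kernel, in frames.
    """
    T, D = features.shape
    # Place kernel means at evenly spaced offsets around the filter center.
    offsets = np.linspace(-1.0, 1.0, n_kernels)
    means = center * (T - 1) + offsets * width * (n_kernels - 1) / 2.0
    t = np.arange(T)
    # (n_kernels, T) Gaussian weights, normalized over the time axis so
    # each kernel computes a weighted average of the frame features.
    w = np.exp(-0.5 * ((t[None, :] - means[:, None]) / width) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ features  # (n_kernels, D) attended representation

# Example: 20 frames of 4-dim features, one filter centered mid-video.
feats = np.random.rand(20, 4)
out = temporal_attention_filter(feats, center=0.5, width=2.0)
print(out.shape)  # (3, 4)
```

Because every step is smooth in `center` and `width`, gradients can flow through the filter into both its parameters and the underlying feature extractor, which is what allows the static filters (shared parameters) and the LSTM-predicted dynamic filters described in the abstract to be trained end-to-end.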

Published

2017-02-12

How to Cite

Piergiovanni, A., Fan, C., & Ryoo, M. (2017). Learning Latent Subevents in Activity Videos Using Temporal Attention Filters. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11240