SAP: Self-Adaptive Proposal Model for Temporal Action Detection Based on Reinforcement Learning

Authors

  • Jingjia Huang Peking University
  • Nannan Li Peking University
  • Tao Zhang Peking University
  • Ge Li Peking University
  • Tiejun Huang Peking University
  • Wen Gao Peking University

Keywords

Computer vision, Action detection, Reinforcement learning

Abstract

Existing action detection algorithms usually generate action proposals through an extensive search over the video at multiple temporal scales, which incurs substantial computational overhead and deviates from the human perception procedure. We argue that detecting actions should naturally be a process of observation and refinement: observe the current window and refine its span to cover true action regions. In this paper, we propose a Self-Adaptive Proposal (SAP) model that learns to find actions by continuously adjusting the temporal bounds in a self-adaptive way. The whole process can be viewed as an agent, which is first placed at the beginning of the video and traverses the whole video by applying a sequence of transformations to the currently attended region, discovering actions according to a learned policy. We utilize reinforcement learning, in particular the Deep Q-learning algorithm, to learn the agent’s decision policy. In addition, we use a temporal pooling operation to extract a more effective feature representation for long temporal windows, and design a regression network to adjust the position offsets between predicted results and the ground truth. Experimental results on THUMOS’14 validate the effectiveness of SAP, which achieves competitive performance with current action detection algorithms while using far fewer proposals.
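The observe-and-refine traversal described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action set (shift, expand, shrink, trigger), the scale factors, and the `q_fn` interface are all assumptions standing in for the learned Deep Q-network.

```python
# Hypothetical temporal-window transformations; names and scale factors
# are illustrative assumptions, not the paper's exact action set.
def apply_action(start, end, action, video_len):
    span = end - start
    if action == "shift_right":        # move the window forward
        start, end = start + span // 2, end + span // 2
    elif action == "expand":           # enlarge the attended region
        start, end = start - span // 4, end + span // 4
    elif action == "shrink":           # tighten around the action
        start, end = start + span // 4, end - span // 4
    # clamp to valid frame indices
    start = max(0, start)
    end = min(video_len, max(end, start + 1))
    return start, end

ACTIONS = ["shift_right", "expand", "shrink", "trigger"]

def traverse(video_len, q_fn, init_span=16, max_steps=50):
    """Greedy traversal: start at the beginning of the video, follow the
    policy given by q_fn(start, end, action), emit a proposal whenever
    'trigger' is chosen, then move past the emitted window."""
    proposals = []
    start, end = 0, init_span
    for _ in range(max_steps):
        if start >= video_len:
            break
        action = max(ACTIONS, key=lambda a: q_fn(start, end, a))
        if action == "trigger":
            proposals.append((start, end))
            start, end = end, min(video_len, end + init_span)
        else:
            start, end = apply_action(start, end, action, video_len)
    return proposals
```

In the paper the policy is a learned Q-network over window features (and the emitted windows are further refined by the regression network); here a stub `q_fn` that always prefers `trigger` simply tiles the video, e.g. `traverse(64, lambda s, e, a: 1.0 if a == "trigger" else 0.0)` yields `[(0, 16), (16, 32), (32, 48), (48, 64)]`.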

Published

2018-04-27

How to Cite

Huang, J., Li, N., Zhang, T., Li, G., Huang, T., & Gao, W. (2018). SAP: Self-Adaptive Proposal Model for Temporal Action Detection Based on Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/12229