A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization

Authors

  • Ashraful Islam, Rensselaer Polytechnic Institute
  • Chengjiang Long, JD Digits AI Lab
  • Richard Radke, Rensselaer Polytechnic Institute

DOI:

https://doi.org/10.1609/aaai.v35i2.16256

Keywords:

Video Understanding & Activity Analysis

Abstract

Weakly supervised temporal action localization is a challenging vision task due to the absence of ground-truth temporal locations of actions in the training videos. With only video-level supervision during training, most existing methods rely on a Multiple Instance Learning (MIL) framework to predict the start and end frame of each action category in a video. However, existing MIL-based approaches have a major limitation: they capture only the most discriminative frames of an action, ignoring the full extent of the activity. Moreover, these methods cannot effectively model background activity, which plays an important role in localizing foreground activities. In this paper, we present a novel framework named HAM-Net with a hybrid attention mechanism that includes temporal soft, semi-soft, and hard attention to address these issues. Our temporal soft attention module, guided by an auxiliary background class in the classification module, models background activity by introducing an "action-ness" score for each video snippet. Our temporal semi-soft and hard attention modules, which compute two additional attention scores for each video snippet, help the model focus on the less discriminative frames of an action in order to capture the full action boundary. Our proposed approach outperforms recent state-of-the-art methods by at least 2.2% mAP at IoU threshold 0.5 on the THUMOS14 dataset, and by at least 1.3% mAP at IoU threshold 0.75 on the ActivityNet1.2 dataset.
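To make the three attention variants described above concrete, below is a minimal PyTorch sketch reconstructed from the abstract alone. The module name, layer sizes, and the thresholding rule used to derive the semi-soft and hard scores from the soft "action-ness" scores are assumptions for illustration, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    """Illustrative sketch of temporal soft / semi-soft / hard attention.

    Hypothetical reconstruction from the abstract: the scorer architecture
    and the fixed threshold are assumptions, not HAM-Net's actual code.
    """

    def __init__(self, feat_dim: int = 2048, threshold: float = 0.5):
        super().__init__()
        self.threshold = threshold
        # Snippet-level "action-ness" scorer: one score in [0, 1] per snippet.
        self.attn = nn.Sequential(
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, feat_dim, num_snippets) snippet features, e.g., from I3D.
        soft = self.attn(x).squeeze(1)        # (batch, T), soft attention
        keep = (soft < self.threshold).float()  # less discriminative snippets
        # Semi-soft: zero out the most discriminative snippets but keep the
        # soft scores elsewhere, forcing the classifier to use weaker frames.
        semi_soft = soft * keep
        # Hard: a binary mask over those same less discriminative snippets.
        hard = keep
        return soft, semi_soft, hard

if __name__ == "__main__":
    feats = torch.randn(2, 2048, 100)  # two videos, 100 snippets each
    soft, semi_soft, hard = HybridAttention()(feats)
    print(soft.shape, semi_soft.shape, hard.shape)  # each: (2, 100)
```

In this reading, the soft scores suppress background snippets, while the semi-soft and hard branches deliberately mask the highest-scoring snippets so that video-level classification also depends on the less discriminative frames, encouraging localization of the full action extent.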

Published

2021-05-18

How to Cite

Islam, A., Long, C., & Radke, R. (2021). A Hybrid Attention Mechanism for Weakly-Supervised Temporal Action Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1637-1645. https://doi.org/10.1609/aaai.v35i2.16256

Section

AAAI Technical Track on Computer Vision I