Weakly Supervised Temporal Action Localization Through Learning Explicit Subspaces for Action and Context

Authors

  • Ziyi Liu, Xi'an Jiaotong University
  • Le Wang, Xi'an Jiaotong University
  • Wei Tang, University of Illinois at Chicago
  • Junsong Yuan, State University of New York at Buffalo
  • Nanning Zheng, Xi'an Jiaotong University
  • Gang Hua, Wormpex AI Research
  • Gang Hua Wormpex AI Research

DOI:

https://doi.org/10.1609/aaai.v35i3.16323

Keywords:

Video Understanding & Activity Analysis, Representation Learning

Abstract

Weakly-supervised Temporal Action Localization (WS-TAL) methods learn to localize the temporal starts and ends of action instances in a video under only video-level supervision. Existing WS-TAL methods rely on deep features learned for action recognition. However, due to the mismatch between classification and localization, these features cannot distinguish the frequently co-occurring contextual background, i.e., the context, from the actual action instances. We term this challenge action-context confusion, and it adversely affects action localization accuracy. To address this challenge, we introduce a framework that learns two feature subspaces, one for actions and one for their context. By explicitly accounting for action visual elements, the action instances can be localized more precisely without distraction from the context. To facilitate the learning of these two feature subspaces with only video-level categorical labels, we leverage the predictions from both spatial and temporal streams for snippet grouping. In addition, an unsupervised learning task is introduced to make the proposed module focus on mining temporal information. The proposed approach outperforms state-of-the-art WS-TAL methods on three benchmarks: the THUMOS14, ActivityNet v1.2, and ActivityNet v1.3 datasets.
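The core idea of separating action and context into explicit feature subspaces can be illustrated with a minimal sketch. This is not the authors' implementation: the projection matrices, the shared classifier, and all dimensions below are illustrative assumptions, showing only how projecting snippet features into two subspaces yields separate class activation sequences, with video-level scores aggregated from the action subspace alone.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative, not from the paper):
# T snippets per video, D-dim features, K action classes.
T, D, K = 8, 16, 4

# Hypothetical learned projections into the two subspaces,
# plus a shared snippet-level classifier.
W_action = rng.standard_normal((D, D)) * 0.1
W_context = rng.standard_normal((D, D)) * 0.1
W_cls = rng.standard_normal((D, K)) * 0.1

# Snippet features, e.g., from a pretrained two-stream backbone.
X = rng.standard_normal((T, D))

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Project each snippet into the action and the context subspace.
F_action = X @ W_action    # (T, D)
F_context = X @ W_context  # (T, D)

# Class activation sequences from each subspace.
cas_action = F_action @ W_cls    # (T, K)
cas_context = F_context @ W_cls  # (T, K)

# Video-level prediction aggregates only the action subspace,
# so context snippets do not inflate localization scores.
video_score = softmax(cas_action.mean(axis=0))  # (K,)
```

In training, the video-level categorical label would supervise `video_score`, while snippet grouping (driven by the spatial- and temporal-stream predictions) decides which snippets feed the action versus context subspace.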

Published

2021-05-18

How to Cite

Liu, Z., Wang, L., Tang, W., Yuan, J., Zheng, N., & Hua, G. (2021). Weakly Supervised Temporal Action Localization Through Learning Explicit Subspaces for Action and Context. Proceedings of the AAAI Conference on Artificial Intelligence, 35(3), 2242-2250. https://doi.org/10.1609/aaai.v35i3.16323

Section

AAAI Technical Track on Computer Vision II