ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization

Authors

  • Zichen Yang Beihang University, China
  • Jie Qin Nanjing University of Aeronautics and Astronautics, China
  • Di Huang Beihang University, China

DOI:

https://doi.org/10.1609/aaai.v36i3.20216

Keywords:

Computer Vision (CV)

Abstract

Weakly-supervised temporal action localization (WTAL) in untrimmed videos has emerged as a practical but challenging task since only video-level labels are available. Existing approaches typically leverage off-the-shelf segment-level features, which suffer from spatial incompleteness and temporal incoherence, thus limiting their performance. In this paper, we tackle this problem from a new perspective by enhancing segment-level representations with a simple yet effective graph convolutional network, namely the action complement graph network (ACGNet). It enables the current video segment to perceive spatio-temporal dependencies from other segments that potentially convey complementary clues, implicitly mitigating the negative effects caused by the two issues above. In this way, the segment-level features become more discriminative and more robust to spatio-temporal variations, contributing to higher localization accuracy. More importantly, the proposed ACGNet works as a universal module that can be flexibly plugged into different WTAL frameworks while maintaining end-to-end training. Extensive experiments are conducted on the THUMOS'14 and ActivityNet 1.2 benchmarks, where the state-of-the-art results clearly demonstrate the superiority of the proposed approach.
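The abstract describes ACGNet only at a high level. As a rough illustration of the core idea, letting each video segment borrow complementary clues from related segments via graph-based message passing before the features are fed to a WTAL head, here is a minimal PyTorch sketch. All names (SegmentGraphConv, top_k, alpha) and design choices (cosine-similarity graph, top-k sparsification, residual blending) are assumptions for illustration, not the paper's actual ACGNet architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SegmentGraphConv(nn.Module):
    """Toy graph convolution over segment-level video features.

    Each segment aggregates features from its most similar peers, so the
    enhanced representation complements its own (possibly incomplete)
    observation. Illustrative sketch only, not the authors' ACGNet.
    """

    def __init__(self, feat_dim: int, top_k: int = 10, alpha: float = 0.5):
        super().__init__()
        self.proj = nn.Linear(feat_dim, feat_dim)
        self.top_k = top_k    # number of most similar segments kept as neighbors (assumed)
        self.alpha = alpha    # residual weight between original and aggregated features (assumed)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (T, D) segment-level features of one untrimmed video
        sim = F.cosine_similarity(x.unsqueeze(1), x.unsqueeze(0), dim=-1)  # (T, T) similarity graph
        # Sparsify the graph: each segment attends only to its top-k most similar peers.
        k = min(self.top_k, sim.size(1))
        topk_vals, topk_idx = sim.topk(k, dim=1)
        adj = torch.zeros_like(sim).scatter_(1, topk_idx, topk_vals)
        adj = adj / adj.sum(dim=1, keepdim=True).clamp(min=1e-6)           # row-normalize
        # Message passing: aggregate neighbor features, then blend with the original ones.
        aggregated = self.proj(adj @ x)
        return self.alpha * x + (1.0 - self.alpha) * aggregated


if __name__ == "__main__":
    feats = torch.randn(120, 2048)              # e.g., 120 segments with 2048-d features
    enhanced = SegmentGraphConv(2048)(feats)
    print(enhanced.shape)                        # torch.Size([120, 2048])
```

Because the output has the same shape as the input, such a module could in principle be inserted between the feature extractor and the localization head of an existing WTAL pipeline, which is the plug-and-play property the abstract emphasizes.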

Published

2022-06-28

How to Cite

Yang, Z., Qin, J., & Huang, D. (2022). ACGNet: Action Complement Graph Network for Weakly-Supervised Temporal Action Localization. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 3090-3098. https://doi.org/10.1609/aaai.v36i3.20216

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III