DCAN: Improving Temporal Action Detection via Dual Context Aggregation

Authors

  • Guo Chen, Nanjing University, China
  • Yin-Dong Zheng, Nanjing University, China
  • Limin Wang, Nanjing University, China
  • Tong Lu, Nanjing University, China

DOI:

https://doi.org/10.1609/aaai.v36i1.19900

Keywords:

Computer Vision (CV)

Abstract

Temporal action detection aims to locate the boundaries of actions in videos. Current boundary-matching-based methods enumerate and calculate all possible boundary matchings to generate proposals. However, these methods neglect long-range context aggregation in boundary prediction. Moreover, because adjacent matchings have similar semantics, local semantic aggregation of densely generated matchings cannot improve semantic richness and discrimination. In this paper, we propose an end-to-end proposal generation method named Dual Context Aggregation Network (DCAN) that aggregates context at two levels, namely the boundary level and the proposal level, to generate high-quality action proposals and thereby improve the performance of temporal action detection. Specifically, we design Multi-Path Temporal Context Aggregation (MTCA) to achieve smooth context aggregation at the boundary level and precise evaluation of boundaries. For matching evaluation, Coarse-to-fine Matching (CFM) is designed to aggregate context at the proposal level and refine the matching map from coarse to fine. We conduct extensive experiments on ActivityNet v1.3 and THUMOS-14. DCAN obtains an average mAP of 35.39% on ActivityNet v1.3 and reaches an mAP of 54.14% at IoU 0.5 on THUMOS-14, demonstrating that DCAN can generate high-quality proposals and achieve state-of-the-art performance. We release the code at https://github.com/cg1177/DCAN.
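
To make the two aggregation levels concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a boundary-level module that fuses a local convolutional path with a long-range (dilated) path over the snippet features, and a proposal-level module that refines a dense boundary-matching map from coarse to fine. The module names, channel sizes, and the choice of dilated 1-D convolutions are illustrative assumptions rather than the authors' actual MTCA/CFM design; see the official code at https://github.com/cg1177/DCAN for the real implementation.

```python
# Illustrative sketch only; hyperparameters and layer choices are assumptions.
import torch
import torch.nn as nn


class BoundaryContextAggregation(nn.Module):
    """Boundary-level aggregation: fuse a local path with a long-range
    (dilated) path over the temporal feature sequence."""

    def __init__(self, channels=256):
        super().__init__()
        self.local_path = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.long_range_path = nn.Conv1d(channels, channels, kernel_size=3,
                                         padding=4, dilation=4)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):               # x: (batch, channels, T)
        return self.relu(self.local_path(x) + self.long_range_path(x))


class CoarseToFineMatching(nn.Module):
    """Proposal-level aggregation: refine a dense start-duration matching
    map with 2-D convolutions, from a coarse context pass to a fine score."""

    def __init__(self, channels=128):
        super().__init__()
        self.coarse = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.fine = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, matching_map):    # matching_map: (batch, channels, D, T)
        coarse = torch.relu(self.coarse(matching_map))
        return torch.sigmoid(self.fine(coarse))   # proposal confidence map


if __name__ == "__main__":
    feats = torch.randn(2, 256, 100)            # snippet features, 100 timesteps
    boundary_feats = BoundaryContextAggregation()(feats)
    match_map = torch.randn(2, 128, 64, 100)    # dense duration-by-start map
    scores = CoarseToFineMatching()(match_map)
    print(boundary_feats.shape, scores.shape)   # (2, 256, 100) (2, 1, 64, 100)
```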

Published

2022-06-28

How to Cite

Chen, G., Zheng, Y.-D., Wang, L., & Lu, T. (2022). DCAN: Improving Temporal Action Detection via Dual Context Aggregation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 248-257. https://doi.org/10.1609/aaai.v36i1.19900

Issue

Vol. 36 No. 1 (2022)

Section

AAAI Technical Track on Computer Vision I