Temporal Context Enhanced Feature Aggregation for Video Object Detection

Authors

  • Fei He (CRISE, CASIA)
  • Naiyu Gao (CRISE, CASIA)
  • Qiaozhe Li (CRISE, CASIA)
  • Senyao Du (Horizon Robotics)
  • Xin Zhao (CRISE, CASIA)
  • Kaiqi Huang (CRISE, CASIA)

DOI:

https://doi.org/10.1609/aaai.v34i07.6727

Abstract

Video object detection is challenging because object appearance deteriorates in certain video frames. A typical solution is to aggregate features from neighboring frames to enhance the per-frame appearance features. However, such methods ignore the temporal relations between the aggregated frames, which are critical for improving video recognition accuracy. To handle the appearance deterioration problem, this paper proposes a temporal context enhanced network (TCENet) that exploits temporal context information through temporal aggregation for video object detection. To handle the displacement of objects across frames, a novel DeformAlign module is proposed to align spatial features from frame to frame. Instead of adopting a fixed-length window fusion strategy, a temporal stride predictor is proposed to adaptively select video frames for aggregation, which allows variable temporal information to be exploited and better results to be achieved while aggregating fewer frames. Our TCENet achieves state-of-the-art performance on the ImageNet VID dataset with faster runtime. Without bells and whistles, our TCENet achieves 80.3% mAP by aggregating only 3 frames.
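
To make the pipeline described above concrete, here is a minimal sketch (in PyTorch) of the two core ideas: a DeformAlign-style module that predicts per-pixel offsets to warp a support frame's features onto the reference frame via deformable convolution, and similarity-weighted aggregation of the aligned features. This is an illustration under assumptions, not the authors' released code: the names `DeformAlignSketch` and `aggregate`, the offset-prediction layer, and the cosine-similarity weighting (a common fusion rule in feature-aggregation detectors) are hypothetical, and the temporal stride predictor that chooses which frames to aggregate is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.ops import DeformConv2d


class DeformAlignSketch(nn.Module):
    """Align support-frame features to the reference frame (hypothetical sketch)."""

    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        # Predict one (x, y) offset per kernel sampling location from the
        # concatenated reference/support features.
        self.offset_pred = nn.Conv2d(
            2 * channels, 2 * kernel_size * kernel_size, kernel_size=3, padding=1
        )
        self.deform_conv = DeformConv2d(channels, channels, kernel_size, padding=pad)

    def forward(self, ref_feat: torch.Tensor, sup_feat: torch.Tensor) -> torch.Tensor:
        # Offsets steer the deformable convolution so the support features
        # are sampled at positions that match the reference frame.
        offsets = self.offset_pred(torch.cat([ref_feat, sup_feat], dim=1))
        return self.deform_conv(sup_feat, offsets)


def aggregate(ref_feat: torch.Tensor, aligned_feats: list) -> torch.Tensor:
    """Fuse aligned support features into the reference feature using
    per-pixel cosine-similarity weights (an assumed fusion rule)."""
    feats = torch.stack([ref_feat] + aligned_feats)        # (T, N, C, H, W)
    sims = torch.stack(
        [F.cosine_similarity(ref_feat, f, dim=1) for f in feats]
    )                                                      # (T, N, H, W)
    weights = torch.softmax(sims, dim=0).unsqueeze(2)      # (T, N, 1, H, W)
    return (weights * feats).sum(dim=0)                    # (N, C, H, W)


if __name__ == "__main__":
    align = DeformAlignSketch(channels=64)
    ref = torch.randn(1, 64, 32, 32)
    supports = [torch.randn(1, 64, 32, 32) for _ in range(2)]
    fused = aggregate(ref, [align(ref, s) for s in supports])
    print(fused.shape)  # torch.Size([1, 64, 32, 32])
```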

Published

2020-04-03

How to Cite

He, F., Gao, N., Li, Q., Du, S., Zhao, X., & Huang, K. (2020). Temporal Context Enhanced Feature Aggregation for Video Object Detection. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 10941-10948. https://doi.org/10.1609/aaai.v34i07.6727

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision