STA: Spatial-Temporal Attention for Large-Scale Video-Based Person Re-Identification

Authors

  • Yang Fu, University of Illinois at Urbana-Champaign
  • Xiaoyang Wang, Nokia Bell Labs
  • Yunchao Wei, University of Illinois at Urbana-Champaign
  • Thomas Huang, University of Illinois at Urbana-Champaign

DOI:

https://doi.org/10.1609/aaai.v33i01.33018287

Abstract

In this work, we propose a novel Spatial-Temporal Attention (STA) approach to tackle the large-scale video-based person re-identification task. Unlike most existing methods, which simply compute clip representations by frame-level aggregation (e.g., average pooling), the proposed STA adopts a more effective way of producing robust clip-level feature representations. Concretely, STA fully exploits the discriminative parts of the target person in both the spatial and temporal dimensions: it produces a 2-D attention score matrix, via inter-frame regularization, that measures the importance of each spatial part across different frames. A more robust clip-level feature representation is then generated by a weighted sum guided by this mined 2-D attention score matrix. In this way, challenging cases for video-based person re-identification, such as pose variation and partial occlusion, can be handled well by STA. We conduct extensive experiments on two large-scale benchmarks, i.e., MARS and DukeMTMC-VideoReID. In particular, mAP reaches 87.7% on MARS, significantly outperforming the state of the art by a large margin of more than 11.6%.
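To make the attention mechanism concrete, the following is a minimal NumPy sketch of the general idea described in the abstract: score horizontal spatial parts of each frame, normalize the scores across frames to form the 2-D attention matrix, and fuse the part features with a weighted sum. The function name, tensor shapes, and the l2-norm scoring rule are illustrative assumptions, not the authors' released implementation.

    import numpy as np

    def sta_attention(frame_features, eps=1e-8):
        # frame_features: (N, K, C) = N frames, K horizontal spatial
        # parts per frame, C-dim feature per part (hypothetical shapes).
        # Score each spatial part by the l2 norm of its feature vector:
        # stronger activations suggest more discriminative regions.
        scores = np.linalg.norm(frame_features, axis=-1)            # (N, K)
        # Inter-frame normalization: for each spatial part, scale the
        # scores across frames so they sum to 1 (a simple stand-in for
        # the paper's inter-frame regularization).
        attention = scores / (scores.sum(axis=0, keepdims=True) + eps)
        # Clip-level representation: weighted sum of part features over
        # frames, guided by the 2-D attention matrix, then concatenated.
        clip_parts = (attention[..., None] * frame_features).sum(axis=0)  # (K, C)
        return attention, clip_parts.reshape(-1)

    # Toy usage: 8 frames, 4 spatial parts, 128-dim part features.
    feats = np.random.rand(8, 4, 128).astype(np.float32)
    att, clip_vec = sta_attention(feats)
    print(att.shape, clip_vec.shape)  # -> (8, 4) (512,)

The per-part l1 normalization over frames is just one simple way to realize inter-frame regularization; a learned attention sub-network could replace the norm-based scoring without changing the weighted-sum fusion.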

Published

2019-07-17

How to Cite

Fu, Y., Wang, X., Wei, Y., & Huang, T. (2019). STA: Spatial-Temporal Attention for Large-Scale Video-Based Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8287-8294. https://doi.org/10.1609/aaai.v33i01.33018287

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision