Multi-Rate Gated Recurrent Convolutional Networks for Video-Based Pedestrian Re-Identification

Authors

  • Zhihui Li Beijing Etrol Technologies Co., Ltd.
  • Lina Yao University of New South Wales
  • Feiping Nie Northwestern Polytechnical University
  • Dingwen Zhang Northwestern Polytechnical University
  • Min Xu University of Technology Sydney

DOI:

https://doi.org/10.1609/aaai.v32i1.12302

Keywords:

Video-based person re-id, Motion Variances, LSTM

Abstract

Matching pedestrians across multiple camera views has attracted significant recent research attention due to its apparent importance in surveillance and security applications. While most existing works address this problem in a still-image setting, we consider the more informative and challenging video-based person re-identification problem, where a video of a pedestrian as seen in one camera needs to be matched to a gallery of videos captured by other non-overlapping cameras. We employ a convolutional network to extract appearance and motion features from raw video sequences, and then feed them into a multi-rate recurrent network to exploit the temporal correlations, and, more importantly, to take into account the fact that pedestrians, sometimes even the same pedestrian, move at different speeds across different camera views. The combined network is trained in an end-to-end fashion, and we further propose an initialization strategy via context reconstruction that largely improves performance. We conduct extensive experiments on the iLIDS-VID and PRID-2011 datasets, and our experimental results confirm the effectiveness and the generalization ability of our model.
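To make the multi-rate recurrent idea described in the abstract concrete, the sketch below shows one plausible way to combine a per-frame convolutional feature extractor with several recurrent cells that update at different temporal rates, whose states are fused into a sequence-level embedding. This is only an illustration under our own assumptions (the module name `MultiRateGRUReID`, the tiny placeholder CNN, the update rates, and the fusion layer are all hypothetical), not the architecture published in the paper.

```python
import torch
import torch.nn as nn

class MultiRateGRUReID(nn.Module):
    """Illustrative sketch: per-frame CNN features feed several GRU cells
    that update at different temporal rates; their final hidden states are
    fused into a single sequence-level embedding."""

    def __init__(self, feat_dim=128, hidden_dim=128, rates=(1, 2, 4)):
        super().__init__()
        # Tiny per-frame CNN standing in for the appearance/motion extractor.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.rates = rates
        # One GRU cell per rate; a cell at rate r only updates every r frames.
        self.cells = nn.ModuleList(nn.GRUCell(feat_dim, hidden_dim) for _ in rates)
        self.fuse = nn.Linear(hidden_dim * len(rates), hidden_dim)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W)
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
        hidden = [frames.new_zeros(b, self.cells[0].hidden_size) for _ in self.rates]
        for step in range(t):
            for i, rate in enumerate(self.rates):
                if step % rate == 0:  # slower cells skip frames
                    hidden[i] = self.cells[i](feats[:, step], hidden[i])
        return self.fuse(torch.cat(hidden, dim=1))  # sequence embedding

# Usage: embed two hypothetical clips and compare them by cosine similarity.
model = MultiRateGRUReID()
clip_a = torch.randn(2, 8, 3, 64, 32)  # (batch, frames, channels, H, W)
clip_b = torch.randn(2, 8, 3, 64, 32)
similarity = torch.cosine_similarity(model(clip_a), model(clip_b), dim=1)
```

The intuition, following the abstract, is that cells ticking at slower rates summarize a sequence at a coarser temporal granularity, so pedestrians walking at different speeds across camera views can still produce comparable embeddings.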

Published

2018-04-27

How to Cite

Li, Z., Yao, L., Nie, F., Zhang, D., & Xu, M. (2018). Multi-Rate Gated Recurrent Convolutional Networks for Video-Based Pedestrian Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.12302