Multi-Rate Gated Recurrent Convolutional Networks for Video-Based Pedestrian Re-Identification
DOI: https://doi.org/10.1609/aaai.v32i1.12302

Keywords: Video-based person re-id, Motion Variances, LSTM

Abstract
Matching pedestrians across multiple camera views has attracted considerable recent research attention due to its apparent importance in surveillance and security applications. While most existing works address this problem in a still-image setting, we consider the more informative and challenging video-based person re-identification problem, where a video of a pedestrian as seen in one camera needs to be matched to a gallery of videos captured by other non-overlapping cameras. We employ a convolutional network to extract appearance and motion features from raw video sequences, and then feed them into a multi-rate recurrent network to exploit temporal correlations and, more importantly, to account for the fact that pedestrians, sometimes even the same pedestrian, move at different speeds across different camera views. The combined network is trained in an end-to-end fashion, and we further propose an initialization strategy via context reconstruction that substantially improves performance. We conduct extensive experiments on the iLIDS-VID and PRID-2011 datasets, and our experimental results confirm the effectiveness and the generalization ability of our model.
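To make the described pipeline concrete, the sketch below shows a minimal PyTorch-style model with a per-frame convolutional encoder followed by several gated recurrent branches, each updated at a different temporal stride so that slower branches summarize the sequence at a coarser rate. This is an illustrative sketch only: the layer sizes, the choice of RGB plus optical-flow input channels, the number of rate branches, and the update strides are assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a multi-rate gated recurrent convolutional model
# (illustrative; hyperparameters and input channels are assumptions).
import torch
import torch.nn as nn


class MultiRateGRCN(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=128, rates=(1, 2, 4)):
        super().__init__()
        # Per-frame encoder over appearance + motion channels
        # (assumed: 3 RGB channels + 2 optical-flow channels = 5).
        self.cnn = nn.Sequential(
            nn.Conv2d(5, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        # One gated recurrent cell per temporal rate.
        self.rates = rates
        self.cells = nn.ModuleList(
            [nn.GRUCell(feat_dim, hidden_dim) for _ in rates]
        )

    def forward(self, clip):
        # clip: (T, 5, H, W) -- one pedestrian video sequence.
        T = clip.size(0)
        feats = self.cnn(clip)                          # (T, feat_dim)
        hidden = [feats.new_zeros(1, c.hidden_size) for c in self.cells]
        for t in range(T):
            x = feats[t:t + 1]                          # (1, feat_dim)
            for i, (rate, cell) in enumerate(zip(self.rates, self.cells)):
                if t % rate == 0:                       # slower branches skip frames
                    hidden[i] = cell(x, hidden[i])
        # Sequence-level embedding: concatenate all rate branches.
        return torch.cat(hidden, dim=1)                 # (1, hidden_dim * len(rates))


# Usage sketch: embed a probe and a gallery clip, then rank by distance.
model = MultiRateGRCN()
probe = model(torch.randn(16, 5, 64, 32))
gallery = model(torch.randn(16, 5, 64, 32))
score = torch.dist(probe, gallery)                      # smaller = better match
```

In this reading, the multiple update rates let fast-moving and slow-moving pedestrians be summarized by at least one branch whose effective time scale matches their gait, which is the motivation the abstract gives for the multi-rate design.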