Spatial-Temporal Person Re-Identification

Authors

  • Guangcong Wang, Sun Yat-sen University
  • Jianhuang Lai, Sun Yat-sen University
  • Peigen Huang, Sun Yat-sen University
  • Xiaohua Xie, Sun Yat-sen University

DOI:

https://doi.org/10.1609/aaai.v33i01.33018933

Abstract

Most current person re-identification (ReID) methods neglect the spatial-temporal constraint. Given a query image, conventional methods compute the feature distances between the query image and all gallery images and return a similarity-ranked list. When the gallery database is very large in practice, these approaches fail to achieve good performance due to appearance ambiguity across different camera views. In this paper, we propose a novel two-stream spatial-temporal person ReID (st-ReID) framework that mines both visual semantic information and spatial-temporal information. To this end, a joint similarity metric with Logistic Smoothing (LS) is introduced to integrate the two kinds of heterogeneous information into a unified framework. To approximate the complex spatial-temporal probability distribution, we develop a fast Histogram-Parzen (HP) method. With the help of the spatial-temporal constraint, the st-ReID model eliminates many irrelevant images and thus narrows the gallery database. Without bells and whistles, our st-ReID method achieves rank-1 accuracy of 98.1% on Market-1501 and 94.4% on DukeMTMC-reID, improving on the baselines of 91.2% and 83.8%, respectively, and outperforming all previous state-of-the-art methods by a large margin.
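The abstract names two technical components: a Histogram-Parzen (HP) estimate of the spatial-temporal probability and a joint similarity metric with Logistic Smoothing (LS). Below is a minimal Python sketch of how these pieces could fit together; the function names, bin count, kernel width, and logistic parameters (lam, gamma) are illustrative assumptions, not the paper's reported settings.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d

    def estimate_st_distribution(cam_pairs, time_diffs,
                                 n_bins=100, max_diff=5000.0, sigma=2.0):
        # Histogram-Parzen (HP) sketch: per camera pair, histogram the time
        # differences of same-identity training pairs, then smooth each
        # histogram with a Gaussian (Parzen) kernel and normalize it into
        # a probability mass over time-difference bins.
        hists = {}
        for (ci, cj), dt in zip(cam_pairs, time_diffs):
            h = hists.setdefault((ci, cj), np.zeros(n_bins))
            b = min(int(abs(dt) / max_diff * n_bins), n_bins - 1)
            h[b] += 1.0
        for key, h in hists.items():
            smoothed = gaussian_filter1d(h, sigma=sigma)
            hists[key] = smoothed / max(smoothed.sum(), 1e-12)
        return hists

    def logistic_smoothing(x, lam=50.0, gamma=5.0):
        # LS: F(x) = 1 / (1 + lam * exp(-gamma * x)). Maps a raw score into
        # (0, 1) so that a near-zero spatial-temporal probability dampens,
        # rather than vetoes, an otherwise strong visual match.
        return 1.0 / (1.0 + lam * np.exp(-gamma * x))

    def joint_similarity(visual_sim, st_prob):
        # Joint metric: fuse the two heterogeneous cues as the product of
        # their logistic-smoothed values.
        return logistic_smoothing(visual_sim) * logistic_smoothing(st_prob)

At query time, st_prob for a query-gallery pair would be read off the smoothed histogram of that camera pair at the bin of the observed time difference; gallery images whose joint score is negligible can then be pruned, which is how the spatial-temporal constraint narrows the gallery in this sketch.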

Published

2019-07-17

How to Cite

Wang, G., Lai, J., Huang, P., & Xie, X. (2019). Spatial-Temporal Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8933-8940. https://doi.org/10.1609/aaai.v33i01.33018933

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision