Unsupervised Learning of Multi-Level Descriptors for Person Re-Identification

Authors

  • Yang Yang, Institute of Automation, Chinese Academy of Sciences (CASIA)
  • Longyin Wen, State University of New York at Albany
  • Siwei Lyu, State University of New York at Albany
  • Stan Li, Institute of Automation, Chinese Academy of Sciences (CASIA)

DOI:

https://doi.org/10.1609/aaai.v31i1.11224

Keywords:

unsupervised learning, multi-level descriptors, person re-identification

Abstract

In this paper, we propose a novel coding method, weighted linear coding (WLC), to learn multi-level (e.g., pixel-level, patch-level, and image-level) descriptors from raw pixel data in an unsupervised manner. A similarity constraint guarantees the saliency of the learned codes, so the resulting multi-level descriptors strike a good balance between robustness and distinctiveness. With WLC, all data from the same region can be jointly encoded, which preserves spatial consistency when holistic image features are extracted. We further apply PCA to these features to obtain compact person representations. In the person matching stage, we exploit the complementary information residing in the multi-level descriptors via a score-level fusion strategy. Experiments on the challenging person re-identification datasets VIPeR and CUHK01 demonstrate the effectiveness of our method.
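As a rough illustration of the matching stage described in the abstract (not the authors' actual implementation), the sketch below assumes pre-computed pixel-, patch-, and image-level descriptors for a gallery and a probe set, compresses each level with PCA, and fuses the per-level similarity scores with a weighted sum. All dimensions, fusion weights, and helper names are hypothetical placeholders.

```python
# Minimal sketch: PCA compaction of multi-level descriptors followed by
# score-level fusion. Dimensions and weights are assumptions, not the
# paper's settings.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Hypothetical pre-computed descriptors (one row per person image) at
# three levels; real descriptors would come from the WLC encoding step.
levels = {
    "pixel": (rng.normal(size=(200, 512)), rng.normal(size=(50, 512))),
    "patch": (rng.normal(size=(200, 1024)), rng.normal(size=(50, 1024))),
    "image": (rng.normal(size=(200, 256)), rng.normal(size=(50, 256))),
}
fusion_weights = {"pixel": 0.3, "patch": 0.4, "image": 0.3}  # assumed

def compact(gallery, probe, n_components=64):
    """Fit PCA on the gallery and project both sets to a compact space."""
    pca = PCA(n_components=n_components)
    return pca.fit_transform(gallery), pca.transform(probe)

def cosine_scores(gallery, probe):
    """Cosine similarity between each probe (rows) and gallery entry (cols)."""
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    p = probe / np.linalg.norm(probe, axis=1, keepdims=True)
    return p @ g.T

# Score-level fusion: weighted sum of the per-level similarity matrices.
fused = np.zeros((50, 200))
for name, (gallery, probe) in levels.items():
    g_c, p_c = compact(gallery, probe)
    fused += fusion_weights[name] * cosine_scores(g_c, p_c)

# Rank gallery identities for each probe by fused score (highest first).
ranks = np.argsort(-fused, axis=1)
print("Top-1 gallery match for first probe:", ranks[0, 0])
```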

Published

2017-02-12

How to Cite

Yang, Y., Wen, L., Lyu, S., & Li, S. (2017). Unsupervised Learning of Multi-Level Descriptors for Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.11224