Learning Incremental Triplet Margin for Person Re-Identification

Authors

  • Yingying Zhang, Hikvision Research Institute
  • Qiaoyong Zhong, Hikvision Research Institute
  • Liang Ma, Hikvision Research Institute
  • Di Xie, Hikvision Research Institute
  • Shiliang Pu, Hikvision Research Institute

DOI:

https://doi.org/10.1609/aaai.v33i01.33019243

Abstract

Person re-identification (ReID) aims to match people across multiple non-overlapping video cameras deployed at different locations. To address this challenging problem, many metric learning approaches have been proposed, among which triplet loss is one of the state-of-the-art methods. In this work, we explore the margin between positive and negative pairs of triplets and show that a large margin is beneficial. In particular, we propose a novel multi-stage training strategy which learns an incremental triplet margin and improves triplet loss effectively. Multiple levels of feature maps are exploited to make the learned features more discriminative. In addition, we introduce a global hard identity searching method to sample hard identities when generating a training batch. Extensive experiments on Market-1501, CUHK03 and DukeMTMC-reID show that our approach yields a performance boost and outperforms most existing state-of-the-art methods.
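For readers unfamiliar with the triplet loss and the idea of an increasing margin, the following is a minimal PyTorch sketch, not the authors' implementation: the margin values, number of stages, embedding size, and batch shapes are illustrative assumptions made only to show how a margin that grows across training stages would be applied.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin):
    """Standard triplet loss: penalize triplets where the anchor-positive
    distance is not at least `margin` smaller than the anchor-negative
    distance (Euclidean distances, averaged over the batch)."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# Hypothetical multi-stage schedule: each stage uses a larger margin,
# so later stages demand a wider separation between positive and
# negative pairs. The values below are placeholders, not the paper's.
margins = [0.3, 0.5, 0.7]

for stage, margin in enumerate(margins):
    # ... in a real pipeline, train the embedding network for several
    # epochs at this margin before moving to the next stage ...
    anchor = torch.randn(8, 128)    # dummy 128-d embeddings, batch of 8
    positive = torch.randn(8, 128)
    negative = torch.randn(8, 128)
    loss = triplet_loss(anchor, positive, negative, margin)
    print(f"stage {stage}: margin={margin}, loss={loss.item():.3f}")
```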

Published

2019-07-17

How to Cite

Zhang, Y., Zhong, Q., Ma, L., Xie, D., & Pu, S. (2019). Learning Incremental Triplet Margin for Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 9243-9250. https://doi.org/10.1609/aaai.v33i01.33019243

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Vision