Appearance and Motion Enhancement for Video-Based Person Re-Identification

Authors

  • Shuzhao Li, Zhejiang University
  • Huimin Yu, Zhejiang University
  • Haoji Hu, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v34i07.6802

Abstract

In this paper, we propose an Appearance and Motion Enhancement Model (AMEM) for video-based person re-identification that enriches the appearance and motion information captured by the backbone network in a more interpretable way. Concretely, an Appearance Enhancement Module (AEM) exploits human attribute recognition, supervised by pseudo labels, to enrich appearance and semantic information. A Motion Enhancement Module (MEM) is designed to capture identity-discriminative walking patterns by predicting future frames. Although the full model contains several auxiliary modules during training, only the backbone plus two small branches are kept for similarity evaluation, constituting a simple but effective final model. Extensive experiments on three popular video-based person ReID benchmarks demonstrate the effectiveness of the proposed model and its state-of-the-art performance compared with existing methods.
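As a rough illustration of the training/inference split described in the abstract, the sketch below pairs a shared backbone with an appearance branch trained on pseudo attribute labels and a motion branch trained to predict a future frame's feature; at test time only the backbone feature and the two small branch embeddings are used for similarity evaluation. The ResNet-50 backbone, GRU predictor, embedding sizes, temporal average pooling, and number of pseudo attributes are assumptions made for illustration, not details taken from the paper.

```python
# Minimal PyTorch-style sketch of the AMEM training/inference split (assumed design).
import torch
import torch.nn as nn
from torchvision.models import resnet50


class AppearanceEnhancementModule(nn.Module):
    """Appearance branch: small embedding kept at test time, plus an auxiliary
    attribute classifier trained on pseudo labels (dropped at test time)."""
    def __init__(self, feat_dim=2048, emb_dim=256, num_attributes=30):  # sizes assumed
        super().__init__()
        self.embed = nn.Linear(feat_dim, emb_dim)
        self.attr_classifier = nn.Linear(emb_dim, num_attributes)

    def forward(self, frame_feats):                  # (B, T, D)
        emb = self.embed(frame_feats).mean(dim=1)    # small appearance embedding
        attr_logits = self.attr_classifier(emb)      # pseudo-attribute predictions
        return emb, attr_logits


class MotionEnhancementModule(nn.Module):
    """Motion branch: small temporal embedding kept at test time, plus an
    auxiliary future-feature predictor (dropped at test time)."""
    def __init__(self, feat_dim=2048, emb_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, emb_dim, batch_first=True)
        self.predictor = nn.Linear(emb_dim, feat_dim)

    def forward(self, frame_feats):                  # (B, T, D)
        h, _ = self.rnn(frame_feats[:, :-1])         # encode all but the last frame
        emb = h[:, -1]                               # small motion embedding
        pred_next = self.predictor(emb)              # predict the last frame's feature
        return emb, pred_next


class AMEM(nn.Module):
    def __init__(self, num_ids, num_attributes=30):
        super().__init__()
        backbone = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])  # pooled 2048-d feats
        self.aem = AppearanceEnhancementModule(2048, 256, num_attributes)
        self.mem = MotionEnhancementModule(2048, 256)
        self.id_head = nn.Linear(2048 + 256 + 256, num_ids)

    def forward(self, clips):                        # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1)).flatten(1).view(b, t, -1)
        app_emb, attr_logits = self.aem(feats)
        mot_emb, pred_next = self.mem(feats)
        video_feat = torch.cat([feats.mean(dim=1), app_emb, mot_emb], dim=1)
        if self.training:
            # Auxiliary outputs feed the attribute and future-prediction losses.
            return self.id_head(video_feat), attr_logits, pred_next, feats[:, -1]
        # Test time: backbone feature plus the two small branch embeddings are
        # all that is kept for similarity evaluation (e.g. cosine distance).
        return video_feat
```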

Published

2020-04-03

How to Cite

Li, S., Yu, H., & Hu, H. (2020). Appearance and Motion Enhancement for Video-Based Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 34(07), 11394-11401. https://doi.org/10.1609/aaai.v34i07.6802

Issue

Vol. 34 No. 07 (2020)

Section

AAAI Technical Track: Vision