Multi-Scale 3D Convolution Network for Video Based Person Re-Identification


  • Jianing Li Peking University
  • Shiliang Zhang Peking University
  • Tiejun Huang Peking University



This paper proposes a two-stream convolution network to extract spatial and temporal cues for video based person Re-Identification (ReID). The temporal stream in this network is constructed by inserting several Multi-scale 3D (M3D) convolution layers into a 2D CNN. The resulting M3D convolution network introduces only a fraction of additional parameters into the 2D CNN, but gains the ability of multi-scale temporal feature learning. With this compact architecture, the M3D convolution network is also more efficient and easier to optimize than existing 3D convolution networks. The temporal stream further involves Residual Attention Layers (RAL) to refine the temporal features. By jointly learning spatial-temporal attention masks in a residual manner, RAL identifies discriminative spatial regions and temporal cues. The other stream in our network is implemented with a 2D CNN for spatial feature extraction. The spatial and temporal features from the two streams are finally fused for video based person ReID. Evaluations on three widely used benchmark datasets, i.e., MARS, PRID2011, and iLIDS-VID, demonstrate the substantial advantages of our method over existing 3D convolution networks and state-of-the-art methods.
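The two core components described above can be sketched in code. The following is a minimal, hypothetical PyTorch illustration (not the authors' implementation): an M3D-style layer that adds parallel temporal convolutions with increasing dilation rates as a residual on top of a frame-wise 2D spatial convolution, and a residual spatial-temporal attention layer. All class and variable names here are illustrative assumptions.

```python
import torch
import torch.nn as nn


class M3DLayer(nn.Module):
    """Sketch of a Multi-scale 3D (M3D) convolution layer.

    A frame-wise spatial convolution (kernel 1x3x3) is augmented with
    parallel temporal convolutions (kernel 3x1x1) at several dilation
    rates, added residually, giving multi-scale temporal receptive
    fields with few extra parameters.
    """

    def __init__(self, channels, dilations=(1, 2, 3)):
        super().__init__()
        # spatial conv applied per frame: kernel (1, 3, 3)
        self.spatial = nn.Conv3d(channels, channels, kernel_size=(1, 3, 3),
                                 padding=(0, 1, 1))
        # temporal convs with increasing dilation -> multiple temporal scales
        self.temporal = nn.ModuleList([
            nn.Conv3d(channels, channels, kernel_size=(3, 1, 1),
                      padding=(d, 0, 0), dilation=(d, 1, 1))
            for d in dilations
        ])

    def forward(self, x):  # x: (batch, channels, T, H, W)
        y = self.spatial(x)
        # residual addition of multi-scale temporal responses
        for conv in self.temporal:
            y = y + conv(x)
        return y


class ResidualAttentionLayer(nn.Module):
    """Sketch of a Residual Attention Layer (RAL).

    A spatial-temporal attention mask is learned and applied in a
    residual manner, so the layer refines rather than replaces the
    input features: out = x * (1 + mask).
    """

    def __init__(self, channels):
        super().__init__()
        self.mask_conv = nn.Conv3d(channels, channels, kernel_size=1)

    def forward(self, x):
        mask = torch.sigmoid(self.mask_conv(x))  # values in (0, 1)
        return x * (1.0 + mask)                  # residual attention


# toy usage: 2 clips, 8 channels, 6 frames of 16x16 feature maps
x = torch.randn(2, 8, 6, 16, 16)
feat = ResidualAttentionLayer(8)(M3DLayer(8)(x))
print(feat.shape)  # shapes are preserved: (2, 8, 6, 16, 16)
```

Note that the temporal padding equals the dilation rate, so each branch preserves the sequence length and the multi-scale outputs can be summed directly.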




How to Cite

Li, J., Zhang, S., & Huang, T. (2019). Multi-Scale 3D Convolution Network for Video Based Person Re-Identification. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8618-8625.



AAAI Technical Track: Vision