Visual-Semantic Graph Reasoning for Pedestrian Attribute Recognition

Authors

  • Qiaozhe Li, Chinese Academy of Sciences
  • Xin Zhao, Chinese Academy of Sciences
  • Ran He, Chinese Academy of Sciences
  • Kaiqi Huang, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v33i01.33018634

Abstract

Pedestrian attribute recognition in surveillance is a challenging task due to poor image quality, significant appearance variations, and the diverse spatial distribution of different attributes. This paper treats pedestrian attribute recognition as a sequential attribute prediction problem and proposes a novel visual-semantic graph reasoning framework to address it. Our framework contains a spatial graph and a directed semantic graph. By performing reasoning with a Graph Convolutional Network (GCN), one graph captures spatial relations between image regions while the other learns potential semantic relations between attributes. An end-to-end architecture is presented that performs mutual embedding between the two graphs, so that each guides the relational learning of the other. We evaluate the proposed framework on three large-scale pedestrian attribute datasets: PETA, RAP, and PA100k. Experiments show the superiority of the proposed method over state-of-the-art approaches and the effectiveness of our joint GCN structure for sequential attribute prediction.
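The abstract describes two GCNs, one over spatial (region) nodes and one over semantic (attribute) nodes, whose features are mutually embedded before attribute prediction. The following is a minimal PyTorch sketch of that general idea only; the layer sizes, adjacency construction, and the `vis_to_sem`/`sem_to_vis` projections are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph-convolution step: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h, adj):
        # adj is assumed to be a normalized adjacency with self-loops.
        return F.relu(adj @ self.linear(h))


class VisualSemanticReasoning(nn.Module):
    def __init__(self, num_regions=8, num_attrs=35, feat_dim=256):
        super().__init__()
        self.spatial_gcn = GCNLayer(feat_dim, feat_dim)    # reasons over region nodes
        self.semantic_gcn = GCNLayer(feat_dim, feat_dim)   # reasons over attribute nodes
        # Hypothetical cross-graph projections standing in for the mutual embedding.
        self.vis_to_sem = nn.Linear(feat_dim, feat_dim)
        self.sem_to_vis = nn.Linear(feat_dim, feat_dim)
        self.attr_embed = nn.Parameter(torch.randn(num_attrs, feat_dim))
        self.classifier = nn.Linear(feat_dim, 1)

    def forward(self, region_feats, spatial_adj, semantic_adj):
        # region_feats: (B, num_regions, feat_dim), e.g. pooled from a CNN backbone.
        B = region_feats.size(0)
        attr_feats = self.attr_embed.unsqueeze(0).expand(B, -1, -1)

        # Reasoning within each graph.
        v = self.spatial_gcn(region_feats, spatial_adj)    # spatial relations between regions
        s = self.semantic_gcn(attr_feats, semantic_adj)    # semantic relations between attributes

        # Mutual embedding: pool one graph's nodes and inject them into the other.
        s = s + self.vis_to_sem(v.mean(dim=1, keepdim=True))
        v = v + self.sem_to_vis(s.mean(dim=1, keepdim=True))

        # Predict each attribute from its visually informed semantic node.
        return self.classifier(s).squeeze(-1)              # (B, num_attrs) logits


# Usage with random tensors, assuming 8 body regions and 35 attributes (as evaluated on PETA).
model = VisualSemanticReasoning()
regions = torch.randn(2, 8, 256)
A_spatial = torch.eye(8)      # placeholder normalized spatial adjacency
A_semantic = torch.eye(35)    # placeholder directed attribute-relation adjacency
logits = model(regions, A_spatial, A_semantic)
print(logits.shape)           # torch.Size([2, 35])
```

In the paper the semantic graph is directed and the prediction is sequential; this sketch collapses that into a single parallel classification step purely to keep the two-graph structure readable.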

Published

2019-07-17

How to Cite

Li, Q., Zhao, X., He, R., & Huang, K. (2019). Visual-Semantic Graph Reasoning for Pedestrian Attribute Recognition. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 8634-8641. https://doi.org/10.1609/aaai.v33i01.33018634

Section

AAAI Technical Track: Vision