Progressive One-shot Human Parsing


  • Haoyu He The University of Sydney
  • Jing Zhang The University of Sydney
  • Bhavani Thuraisingham The University of Texas at Dallas
  • Dacheng Tao The University of Sydney



Segmentation, Biometrics, Face, Gesture & Pose, Multi-class/Multi-label Learning & Extreme Classification, Representation Learning


Prior human parsing models are limited to parsing humans into the classes pre-defined in the training data, which makes it hard to generalize to unseen classes, e.g., new clothing in fashion analysis. In this paper, we propose a new problem named one-shot human parsing (OSHP), which requires parsing humans into an open set of reference classes defined by any single reference example. During training, only the base classes defined in the training set are exposed, which can overlap with part of the reference classes. We devise a novel Progressive One-shot Parsing network (POPNet) to address two critical challenges, i.e., testing bias and small part sizes. POPNet consists of two collaborative metric learning modules, named the Attention Guidance Module and the Nearest Centroid Module, which learn representative prototypes for the base classes and quickly transfer this ability to unseen classes during testing, thereby reducing testing bias. Moreover, POPNet adopts a progressive human parsing framework that incorporates the learned knowledge of parent classes at a coarse granularity to help recognize descendant classes at a fine granularity, thereby handling the small-size issue. Experiments on the ATR-OS benchmark tailored for OSHP demonstrate that POPNet outperforms other representative one-shot segmentation models by large margins and establishes a strong baseline. Source code can be found at
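The prototype-based matching idea behind the Nearest Centroid Module can be illustrated with a minimal sketch: a prototype per class is obtained by masked average pooling over the reference image's features, and each query pixel is then assigned to the class with the most similar prototype. The function names, the use of NumPy, and the cosine-similarity assignment rule below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def masked_average_pooling(features, mask):
    # features: (H, W, C) feature map; mask: (H, W) binary mask for one class.
    # Returns a C-dim class prototype: the mean feature over the masked region.
    denom = mask.sum() + 1e-8
    return (features * mask[..., None]).sum(axis=(0, 1)) / denom

def nearest_centroid_parse(query_feats, prototypes):
    # query_feats: (H, W, C); prototypes: (K, C), one per reference class.
    # Assigns each query pixel to the class whose prototype is closest
    # under cosine similarity (a common nearest-centroid rule; illustrative).
    q = query_feats / (np.linalg.norm(query_feats, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    sims = q @ p.T            # (H, W, K) pixel-to-prototype similarities
    return sims.argmax(axis=-1)  # (H, W) predicted class map
```

Because the prototypes are computed from a single reference example at test time, the same matching rule applies unchanged to classes never seen during training, which is what makes this style of metric learning suitable for the one-shot setting.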




How to Cite

He, H., Zhang, J., Thuraisingham, B., & Tao, D. (2021). Progressive One-shot Human Parsing. Proceedings of the AAAI Conference on Artificial Intelligence, 35(2), 1522-1530.



AAAI Technical Track on Computer Vision I