Can Semantic Labels Assist Self-Supervised Visual Representation Learning?

Authors

  • Longhui Wei, University of Science and Technology of China; Huawei Inc.
  • Lingxi Xie, Huawei Inc.
  • Jianzhong He, Huawei Inc.
  • Xiaopeng Zhang, Huawei Inc.
  • Qi Tian, Huawei Inc.

DOI:

https://doi.org/10.1609/aaai.v36i3.20166

Keywords:

Computer Vision (CV)

Abstract

Recently, contrastive learning has substantially advanced unsupervised visual representation learning. Pre-trained on ImageNet, some self-supervised algorithms reported higher transfer learning performance than fully-supervised methods, seeming to deliver the message that human labels hardly contribute to learning transferable visual features. In this paper, we defend the usefulness of semantic labels but point out that fully-supervised and self-supervised methods are pursuing different kinds of features. To bridge this gap, we present a new algorithm named Supervised Contrastive Adjustment in Neighborhood (SCAN) that maximally prevents the semantic guidance from damaging the appearance feature embedding. In a series of downstream tasks, SCAN achieves superior performance compared to previous fully-supervised and self-supervised methods, and sometimes the gain is significant. More importantly, our study reveals that semantic labels are useful in assisting self-supervised methods, opening a new direction for the community.
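
For readers unfamiliar with the contrastive objectives the abstract refers to, the sketch below contrasts a standard self-supervised InfoNCE loss with a label-aware variant in the spirit of supervised contrastive learning. It is a minimal illustration under assumed inputs (L2-normalized embeddings and integer class labels); the function names are hypothetical and it is not the SCAN algorithm itself.

```python
# Minimal sketch (PyTorch): self-supervised InfoNCE vs. a label-aware contrastive loss.
# Hypothetical helpers; embeddings are assumed L2-normalized with shape (N, D).
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.1):
    """Self-supervised InfoNCE: the only positive for z1[i] is its augmented view z2[i]."""
    logits = z1 @ z2.t() / temperature                 # (N, N) pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def supervised_contrastive(z, labels, temperature=0.1):
    """Label-aware variant: every same-class sample in the batch is treated as a positive."""
    sim = z @ z.t() / temperature                      # (N, N)
    mask_self = torch.eye(z.size(0), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask_self, float('-inf'))    # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    # average log-likelihood over positives per anchor, then negate
    return -(log_prob * pos).sum(1).div(pos.sum(1).clamp(min=1)).mean()
```

A typical self-supervised call would be `info_nce(F.normalize(h1, dim=1), F.normalize(h2, dim=1))` on two augmented views of the same batch, whereas the label-aware loss additionally consumes class labels, which is the kind of semantic guidance the abstract argues can be injected without destroying appearance features.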

Published

2022-06-28

How to Cite

Wei, L., Xie, L., He, J., Zhang, X., & Tian, Q. (2022). Can Semantic Labels Assist Self-Supervised Visual Representation Learning?. Proceedings of the AAAI Conference on Artificial Intelligence, 36(3), 2642-2650. https://doi.org/10.1609/aaai.v36i3.20166

Issue

Vol. 36 No. 3 (2022)

Section

AAAI Technical Track on Computer Vision III