Distilling Localization for Self-Supervised Representation Learning

Authors

  • Nanxuan Zhao City University of Hong Kong
  • Zhirong Wu Microsoft Research
  • Rynson W.H. Lau City University of Hong Kong
  • Stephen Lin Microsoft Research

Keywords

Unsupervised & Self-Supervised Learning

Abstract

Recent progress in contrastive learning has revolutionized unsupervised representation learning. Concretely, multiple views (augmentations) of the same image are encouraged to map to nearby embeddings, while views from different images are pushed apart. In this paper, through visualizing and diagnosing classification errors, we observe that current contrastive models are ineffective at localizing the foreground object, limiting their ability to extract discriminative high-level features. This is because the view generation process treats all pixels in an image uniformly. To address this problem, we propose a data-driven approach for learning invariance to backgrounds. It first estimates foreground saliency in images and then creates augmentations by copy-and-pasting the foreground onto a variety of backgrounds. The learning still follows an instance discrimination approach, so the representation is trained to disregard background content and focus on the foreground. We study a variety of saliency estimation methods and find that most lead to improvements for contrastive learning. With this approach, significant performance gains are achieved for self-supervised learning on ImageNet classification, as well as for object detection on PASCAL VOC and MSCOCO.
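The copy-and-paste view generation described above can be sketched as a simple alpha composite: the saliency map acts as a soft mask that keeps foreground pixels from the original image and fills the rest from a new background. This is a minimal NumPy sketch under stated assumptions; the paper obtains the saliency map from a saliency estimator, whereas here it is supplied directly, and the function name `copy_paste_augment` is illustrative, not from the paper.

```python
import numpy as np

def copy_paste_augment(image, background, saliency):
    """Composite the salient foreground of `image` onto `background`.

    image, background: HxWx3 float arrays.
    saliency: HxW float array in [0, 1]; values near 1 mark foreground.
    (In the paper this map comes from a saliency estimator; here it is
    given directly for illustration.)
    """
    mask = saliency[..., None]  # add a channel axis so the mask broadcasts
    return (mask * image + (1.0 - mask) * background).astype(image.dtype)

# Toy example: a bright 4x4 "object" centered in a dark 8x8 image.
img = np.zeros((8, 8, 3), dtype=np.float32)
img[2:6, 2:6] = 1.0                              # foreground patch
sal = np.zeros((8, 8), dtype=np.float32)
sal[2:6, 2:6] = 1.0                              # ideal binary saliency mask
bg = np.full((8, 8, 3), 0.5, dtype=np.float32)   # uniform gray background

aug = copy_paste_augment(img, bg, sal)
# Foreground pixels keep the object; background pixels take the new scene.
```

Training then proceeds with ordinary instance discrimination on such views, so embeddings that depend on background content are penalized while foreground features are preserved.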

Published

2021-05-18

How to Cite

Zhao, N., Wu, Z., Lau, R. W., & Lin, S. (2021). Distilling Localization for Self-Supervised Representation Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10990-10998. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17312

Section

AAAI Technical Track on Machine Learning V