On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals

Authors

  • Haizhou Shi, Zhejiang University
  • Youcai Zhang, OPPO Research Institute
  • Siliang Tang, Zhejiang University
  • Wenjie Zhu, New York University
  • Yaqian Li, OPPO Research Institute
  • Yandong Guo, OPPO Research Institute
  • Yueting Zhuang, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v36i2.20120

Keywords:

Computer Vision (CV)

Abstract

It is widely acknowledged that small models perform poorly under the paradigm of self-supervised contrastive learning. Existing methods usually adopt a large off-the-shelf model to transfer knowledge to the small one via distillation. Despite their effectiveness, distillation-based methods may not be suitable for some resource-restricted scenarios due to the huge computational expense of deploying a large model. In this paper, we study the problem of training self-supervised small models without distillation signals. We first evaluate the representation spaces of the small models and make two noteworthy observations: (i) the small models can complete the pretext task without overfitting despite their limited capacity, and (ii) they universally suffer from the problem of over-clustering. We then verify multiple assumptions that are considered to alleviate the over-clustering phenomenon. Finally, we combine the validated techniques and improve the baseline performance of five small architectures by considerable margins, which indicates that training small self-supervised contrastive models is feasible even without distillation signals. The code is available at https://github.com/WOWNICE/ssl-small.
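For context on the pretext task referenced in the abstract, the sketch below shows a generic InfoNCE-style contrastive objective of the kind small encoders are pretrained with. It is an illustrative assumption, not the paper's exact loss; the function name, temperature, and batch shapes are hypothetical.

```python
# Minimal sketch of a generic InfoNCE-style contrastive objective (assumed,
# not the paper's exact setup). z1 and z2 are projections of two augmented
# views of the same image batch, shape (N, D).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.2) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (N, N) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)   # positives lie on the diagonal
    # Symmetrize so each view serves as the anchor once.
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

# Usage with random embeddings standing in for a small encoder's output:
# z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
# loss = info_nce_loss(z1, z2)
```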

Published

2022-06-28

How to Cite

Shi, H., Zhang, Y., Tang, S., Zhu, W., Li, Y., Guo, Y., & Zhuang, Y. (2022). On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals. Proceedings of the AAAI Conference on Artificial Intelligence, 36(2), 2225-2234. https://doi.org/10.1609/aaai.v36i2.20120

Section

AAAI Technical Track on Computer Vision II