Max-Margin Contrastive Learning

Authors

  • Anshul Shah, Johns Hopkins University, Baltimore, MD
  • Suvrit Sra, Massachusetts Institute of Technology, Cambridge, MA
  • Rama Chellappa, Johns Hopkins University, Baltimore, MD
  • Anoop Cherian, Mitsubishi Electric Research Labs, Cambridge, MA

DOI:

https://doi.org/10.1609/aaai.v36i8.20796

Keywords:

Machine Learning (ML), Computer Vision (CV)

Abstract

Standard contrastive learning approaches usually require a large number of negatives for effective unsupervised learning and often exhibit slow convergence. We suspect this behavior is due to the suboptimal selection of the negatives used to offer contrast to the positives. We counter this difficulty by taking inspiration from support vector machines (SVMs) and present max-margin contrastive learning (MMCL). Our approach selects negatives as the sparse support vectors obtained by solving a quadratic optimization problem, and contrastiveness is enforced by maximizing the decision margin. As SVM optimization can be computationally demanding, especially in an end-to-end setting, we present simplifications that alleviate the computational burden. We validate our approach on standard vision benchmark datasets, demonstrating better performance in unsupervised representation learning than the state of the art, along with better empirical convergence.
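
To make the core idea concrete, the sketch below is a rough illustration of SVM-based negative selection, not the authors' implementation: it treats a single anchor embedding as the positive class, fits a soft-margin linear SVM (here via scikit-learn's SVC) against a bank of negatives, and reads off the sparse support-vector negatives together with the decision-margin width. The function name select_support_negatives and the penalty parameter C are illustrative assumptions.

    # Minimal sketch (an assumption, not the paper's released code):
    # select sparse "support" negatives for one anchor by fitting a
    # linear SVM that separates the anchor (+1) from negatives (-1).
    import numpy as np
    from sklearn.svm import SVC

    def select_support_negatives(anchor, negatives, C=1.0):
        """Return indices of negatives that are support vectors,
        plus the decision-margin width 2 / ||w||."""
        X = np.vstack([anchor[None, :], negatives])
        y = np.array([1] + [-1] * len(negatives))
        svm = SVC(kernel="linear", C=C).fit(X, y)
        # support_ indexes into X; drop the anchor at index 0.
        sv_idx = svm.support_[svm.support_ > 0] - 1
        margin = 2.0 / np.linalg.norm(svm.coef_)
        return sv_idx, margin

    # Toy usage with 128-D unit-normalized embeddings.
    rng = np.random.default_rng(0)
    anchor = rng.normal(size=128)
    anchor /= np.linalg.norm(anchor)
    negatives = rng.normal(size=(256, 128))
    negatives /= np.linalg.norm(negatives, axis=1, keepdims=True)
    idx, margin = select_support_negatives(anchor, negatives)
    print(f"{len(idx)} support negatives of {len(negatives)}; margin = {margin:.3f}")

In practice only a small fraction of the negatives end up with nonzero dual coefficients, which is the sparsity the abstract refers to; a training loss would then contrast the anchor against just those support negatives while encouraging a larger margin.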

Published

2022-06-28

How to Cite

Shah, A., Sra, S., Chellappa, R., & Cherian, A. (2022). Max-Margin Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8220-8230. https://doi.org/10.1609/aaai.v36i8.20796

Section

AAAI Technical Track on Machine Learning III