Learning Invariant Representations using Inverse Contrastive Loss

Authors

  • Aditya Kumar Akash, University of Wisconsin-Madison
  • Vishnu Suresh Lokhande, University of Wisconsin-Madison
  • Sathya N. Ravi, University of Illinois at Chicago
  • Vikas Singh, University of Wisconsin-Madison

DOI:

https://doi.org/10.1609/aaai.v35i8.16815

Keywords:

Representation Learning, Adversarial Learning & Robustness, Applications, Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

Learning invariant representations is a critical first step in a number of machine learning tasks. A common approach is given by the so-called information bottleneck principle, in which an application-dependent function of mutual information is carefully chosen and optimized. Unfortunately, in practice, these functions are not well suited for optimization purposes since they are agnostic to the metric structure of the parameters of the model. In our paper, we introduce a class of losses for learning representations that are invariant to some extraneous variable of interest by inverting the class of contrastive losses, i.e., inverse contrastive loss (ICL). We show that if the extraneous variable is binary, then optimizing ICL is equivalent to optimizing a regularized MMD divergence. More generally, we also show that if we are provided a metric on the sample space, our formulation of ICL can be decomposed into a sum of convex functions of the given distance metric. Our experimental results indicate that models obtained by optimizing ICL achieve significantly better invariance to the extraneous variable for a fixed desired level of accuracy. In a variety of experimental settings, we show the applicability of ICL for learning invariant representations for both continuous and discrete protected/extraneous variables. The project page with code is available at https://github.com/adityakumarakash/ICL
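The abstract states that, for a binary extraneous variable, optimizing ICL is equivalent to optimizing a regularized MMD divergence between the two groups of representations. A minimal sketch of that MMD-style invariance penalty is below; this is not the authors' exact ICL formulation (see the linked repository for that), only an illustrative RBF-kernel MMD between the embeddings of the two groups, where the function names `rbf_kernel` and `mmd_penalty` and the kernel bandwidth are our own choices.

```python
import numpy as np

def rbf_kernel(X, Y, sigma=1.0):
    # Pairwise RBF kernel values between rows of X and rows of Y.
    d2 = (np.sum(X**2, axis=1)[:, None]
          + np.sum(Y**2, axis=1)[None, :]
          - 2.0 * X @ Y.T)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd_penalty(Z, a, sigma=1.0):
    """Biased squared-MMD estimate between embeddings of groups a==0 and a==1.

    Z : (n, d) array of learned representations.
    a : (n,) binary protected/extraneous attribute.
    Driving this toward zero pushes the two groups' embedding
    distributions together, i.e., toward invariance to `a`.
    """
    Z0, Z1 = Z[a == 0], Z[a == 1]
    return (rbf_kernel(Z0, Z0, sigma).mean()
            + rbf_kernel(Z1, Z1, sigma).mean()
            - 2.0 * rbf_kernel(Z0, Z1, sigma).mean())
```

In training, a penalty of this form would be added (with a trade-off weight) to the task loss, so that accuracy is balanced against invariance to the protected variable, matching the accuracy-versus-invariance trade-off discussed in the abstract.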

Published

2021-05-18

How to Cite

Akash, A. K., Lokhande, V. S., Ravi, S. N., & Singh, V. (2021). Learning Invariant Representations using Inverse Contrastive Loss. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6582-6591. https://doi.org/10.1609/aaai.v35i8.16815

Section

AAAI Technical Track on Machine Learning I