Improved Mutual Information Estimation

Authors

  • Youssef Mroueh, IBM Research AI
  • Igor Melnyk, IBM Research AI
  • Pierre Dognin, IBM Research AI
  • Jarret Ross, IBM Research AI
  • Tom Sercu, IBM Research AI

DOI

https://doi.org/10.1609/aaai.v35i10.17089

Keywords

Kernel Methods

Abstract

We propose to estimate the KL divergence using a relaxed likelihood-ratio estimation in a reproducing kernel Hilbert space (RKHS). We show that, in the particular case of mutual information (MI) estimation, the dual of our KL ratio estimator corresponds to a lower bound on the MI related to the so-called Donsker-Varadhan lower bound. In this dual form, MI is estimated by learning a witness function that discriminates between the joint density and the product of the marginals, together with an auxiliary scalar variable that enforces a normalization constraint on the likelihood ratio. By extending the function space to neural networks, we propose an efficient neural MI estimator and validate its performance on synthetic examples, showing an advantage over existing baselines. We further demonstrate its strength in large-scale self-supervised representation learning through MI maximization.
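
For orientation, the Donsker-Varadhan (DV) bound that the dual form relates to, together with its scalar relaxation, can be sketched as follows; the notation here is illustrative and not taken from the paper:

```latex
% Sketch: DV lower bound on MI and a relaxation via an auxiliary scalar eta.
% T is a witness function; the notation is ours, not the paper's.
\begin{align*}
  I(X;Y) = \mathrm{KL}\bigl(p_{XY} \,\|\, p_X p_Y\bigr)
    &\ge \mathbb{E}_{p_{XY}}[T(x,y)] - \log \mathbb{E}_{p_X p_Y}\bigl[e^{T(x,y)}\bigr] \\
  % Relax the log term with  log u <= u/a + log a - 1  and  a = e^{eta}:
    &\ge \mathbb{E}_{p_{XY}}[T(x,y)] - \mathbb{E}_{p_X p_Y}\bigl[e^{T(x,y)-\eta}\bigr] - \eta + 1.
\end{align*}
% The relaxed bound is tight over eta exactly when
% E_{p_X p_Y}[ e^{T - eta} ] = 1, i.e. when e^{T - eta} is a
% normalized likelihood ratio; this is the normalization constraint.
```

In the same spirit, a minimal training sketch (PyTorch assumed) of a neural estimator maximizing this relaxed bound over a witness network and the auxiliary scalar is given below; the names (WitnessNet, mi_lower_bound) and all hyperparameters are hypothetical, not the authors' implementation:

```python
import torch
import torch.nn as nn

class WitnessNet(nn.Module):
    """Witness function T(x, y) realized as a small MLP (hypothetical architecture)."""
    def __init__(self, dim_x, dim_y, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_x + dim_y, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, y):
        return self.net(torch.cat([x, y], dim=-1)).squeeze(-1)

def mi_lower_bound(T, eta, x, y):
    """Relaxed DV bound: E_joint[T] - E_marginals[exp(T - eta)] - eta + 1.

    Shuffling y within the batch approximates samples from the product
    of marginals; maximizing over eta drives E[exp(T - eta)] toward 1,
    i.e. it enforces the likelihood-ratio normalization constraint.
    """
    joint = T(x, y).mean()                       # expectation over joint samples
    y_shuffled = y[torch.randperm(y.size(0))]    # break the pairing
    marg = torch.exp(T(x, y_shuffled) - eta).mean()
    return joint - marg - eta + 1.0

# Usage sketch: jointly maximize the bound over (T, eta) on toy data.
dim = 5
T = WitnessNet(dim, dim)
eta = torch.zeros((), requires_grad=True)        # auxiliary scalar
opt = torch.optim.Adam(list(T.parameters()) + [eta], lr=1e-4)
for step in range(1000):
    x = torch.randn(256, dim)
    y = x + 0.5 * torch.randn(256, dim)          # correlated toy pair
    loss = -mi_lower_bound(T, eta, x, y)         # ascend the lower bound
    opt.zero_grad(); loss.backward(); opt.step()
```

After training, evaluating mi_lower_bound on held-out batches yields the MI estimate (in nats).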

Published

2021-05-18

How to Cite

Mroueh, Y., Melnyk, I., Dognin, P., Ross, J., & Sercu, T. (2021). Improved Mutual Information Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9009-9017. https://doi.org/10.1609/aaai.v35i10.17089

Issue

Vol. 35 No. 10 (2021)

Section

AAAI Technical Track on Machine Learning III