Gradient Estimation for Binary Latent Variables via Gradient Variance Clipping

Authors

  • Russell Z. Kunes (Department of Statistics, Columbia University; Computational and Systems Biology, Memorial Sloan Kettering Cancer Center; Irving Institute of Cancer Dynamics, Columbia University)
  • Mingzhang Yin (Irving Institute of Cancer Dynamics, Columbia University; Warrington College of Business, University of Florida)
  • Max Land (Computational and Systems Biology, Memorial Sloan Kettering Cancer Center)
  • Doron Haviv (Computational and Systems Biology, Memorial Sloan Kettering Cancer Center)
  • Dana Pe'er (Computational and Systems Biology, Memorial Sloan Kettering Cancer Center; Howard Hughes Medical Institute)
  • Simon Tavaré (Department of Statistics, Columbia University; Irving Institute of Cancer Dynamics, Columbia University)

DOI:

https://doi.org/10.1609/aaai.v37i7.26013

Keywords:

ML: Deep Generative Models & Autoencoders, ML: Probabilistic Methods

Abstract

Gradient estimation is often necessary for fitting generative models with discrete latent variables, in contexts such as reinforcement learning and variational autoencoder (VAE) training. The DisARM estimator achieves state-of-the-art gradient variance for Bernoulli latent variable models in many contexts. However, DisARM and other estimators have potentially exploding variance near the boundary of the parameter space, where solutions tend to lie. To ameliorate this issue, we propose a new gradient estimator, bitflip-1, that has lower variance at the boundaries of the parameter space. As bitflip-1 has properties complementary to those of existing estimators, we introduce an aggregated estimator, unbiased gradient variance clipping (UGC), that uses either the bitflip-1 or the DisARM gradient update for each coordinate. We prove that UGC has uniformly lower variance than DisARM. Empirically, we observe that UGC attains the optimal value of the optimization objective in toy experiments, in discrete VAE training, and in a best-subset selection problem.
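The coordinate-wise selection idea in the abstract can be illustrated with a short NumPy sketch. This is not the authors' reference implementation: the DisARM update below follows its published form; the flip-based update uses the standard local-expectation gradient for Bernoulli variables (the paper's bitflip-1 estimator may allocate function evaluations differently); and the boundary-threshold rule with the hypothetical parameter `eps` is an assumed stand-in for the analytical variance-clipping criterion derived in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def disarm_grad(f, phi, rng):
    """DisARM estimate of d/d_phi E[f(b)], b ~ Bernoulli(sigmoid(phi)),
    using an antithetic pair coupled through a common uniform."""
    u = rng.uniform(size=phi.shape)
    b = (u < sigmoid(phi)).astype(float)
    b_tilde = ((1.0 - u) < sigmoid(phi)).astype(float)   # antithetic sample
    weight = ((-1.0) ** b_tilde) * (b != b_tilde) * sigmoid(np.abs(phi))
    return 0.5 * (f(b) - f(b_tilde)) * weight

def bitflip_grad(f, phi, rng):
    """Flip-based estimate: per coordinate, the exact local difference
    f(b with b_i = 1) - f(b with b_i = 0), costing one extra evaluation
    of f per coordinate, times the logit chain-rule factor."""
    theta = sigmoid(phi)
    b = (rng.uniform(size=phi.shape) < theta).astype(float)
    fb = f(b)
    g = np.empty_like(phi)
    for i in range(phi.size):
        b_flip = b.copy()
        b_flip[i] = 1.0 - b_flip[i]
        g[i] = (2.0 * b[i] - 1.0) * (fb - f(b_flip)) * theta[i] * (1.0 - theta[i])
    return g

def ugc_grad(f, phi, rng, eps=0.05):
    """Illustrative stand-in for UGC: use the flip-based update for
    coordinates whose mean parameter is within eps of {0, 1} (where
    DisARM's variance can blow up) and DisARM elsewhere. The threshold
    rule is an assumption, not the paper's derived criterion; for
    simplicity both estimators are computed for all coordinates."""
    theta = sigmoid(phi)
    near_boundary = np.minimum(theta, 1.0 - theta) < eps
    return np.where(near_boundary, bitflip_grad(f, phi, rng), disarm_grad(f, phi, rng))

# Toy usage: two near-boundary logits and one interior logit.
rng = np.random.default_rng(0)
phi = np.array([4.0, -4.0, 0.0])
target = np.array([1.0, 0.0, 0.49])
f = lambda b: float(((b - target) ** 2).sum())
print(ugc_grad(f, phi, rng))
```

Because the per-coordinate choice in this sketch depends only on phi and not on the sampled gradients, each coordinate's update remains unbiased, which is the property an aggregated estimator of this kind needs.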

Published

2023-06-26

How to Cite

Kunes, R. Z., Yin, M., Land, M., Haviv, D., Pe’er, D., & Tavaré, S. (2023). Gradient Estimation for Binary Latent Variables via Gradient Variance Clipping. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8405-8412. https://doi.org/10.1609/aaai.v37i7.26013

Issue

Vol. 37 No. 7 (2023)

Section

AAAI Technical Track on Machine Learning II