Bridging Maximum Likelihood and Adversarial Learning via α-Divergence

Authors

  • Miaoyun Zhao, Duke University
  • Yulai Cong, Duke University
  • Shuyang Dai, Duke University
  • Lawrence Carin, Duke University

DOI:

https://doi.org/10.1609/aaai.v34i04.6172

Abstract

Maximum likelihood (ML) and adversarial learning are two popular approaches for training generative models, and from many perspectives these techniques are complementary. ML learning encourages the capture of all data modes, and it is typically characterized by stable training. However, ML learning tends to distribute probability mass diffusely over the data space, e.g., yielding blurry synthetic images. Adversarial learning is well known to synthesize highly realistic natural images, despite practical challenges like mode dropping and delicate training. We propose an α-Bridge to unify the advantages of ML and adversarial learning, enabling a smooth transfer from one to the other via the α-divergence. We reveal that generalizations of the α-Bridge are closely related to approaches developed recently to regularize adversarial learning, providing insights into that prior work and further understanding of why the α-Bridge performs well in practice.
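
For context, the abstract leaves the divergence family implicit. A standard reference point (not taken from the paper, whose exact parameterization of the α-Bridge may differ) is Amari's α-divergence, which shows how a single parameter α interpolates between the maximum-likelihood direction KL(p‖q) and the mode-seeking reverse direction KL(q‖p) often associated with adversarial training:

  % Amari's α-divergence between data distribution p and model q
  % (illustrative parameterization; the paper's α-Bridge may be defined differently):
  \[
    D_{\alpha}(p \,\|\, q)
      = \frac{1}{\alpha(1-\alpha)}
        \left( 1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \right),
    \qquad \alpha \in (0, 1),
  \]
  % with the standard limits recovering the two training objectives:
  \[
    \lim_{\alpha \to 1} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(p \,\|\, q)
    \;\text{(maximum likelihood)},
    \qquad
    \lim_{\alpha \to 0} D_{\alpha}(p \,\|\, q) = \mathrm{KL}(q \,\|\, p)
    \;\text{(mode-seeking)}.
  \]

Sweeping α across this range is what makes a smooth transfer between the two training regimes possible.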

Published

2020-04-03

How to Cite

Zhao, M., Cong, Y., Dai, S., & Carin, L. (2020). Bridging Maximum Likelihood and Adversarial Learning via α-Divergence. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6901-6908. https://doi.org/10.1609/aaai.v34i04.6172

Section

AAAI Technical Track: Machine Learning