Radial and Directional Posteriors for Bayesian Deep Learning

Authors

  • Changyong Oh, Max Planck Institute for Intelligent Systems
  • Kamil Adamczewski, Max Planck Institute for Intelligent Systems
  • Mijung Park, Max Planck Institute for Intelligent Systems

DOI:

https://doi.org/10.1609/aaai.v34i04.5976

Abstract

We propose a new variational family for Bayesian neural networks. We decompose the variational posterior into two components: a radial component that captures the strength of each neuron in terms of its magnitude, and a directional component that captures the statistical dependencies among the weight parameters. The dependencies learned via the directional density provide better modeling performance compared to the widely used Gaussian mean-field-type variational family. In addition, the strength of input and output neurons learned via our posterior provides a structured way to compress neural networks. Indeed, experiments show that our variational family improves predictive performance and yields compressed networks simultaneously.
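The core idea of the decomposition can be illustrated with a toy sketch: write each weight vector as w = r · u, where r > 0 is the magnitude (radial part) and u is a unit vector (directional part), and sample the two factors independently. The distribution choices below (a log-normal radius and a Gaussian perturbation projected to the unit sphere) are illustrative stand-ins, not the parameterization used in the paper; the function and parameter names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu_log_r, sigma_log_r, mu_d, n_samples=1):
    """Sample weight vectors from a toy radial/directional posterior.

    Radial part: log-normal over the magnitude r = ||w||.
    Directional part: a Gaussian perturbation of a mean direction,
    projected onto the unit sphere (a simple stand-in for a proper
    directional density; not the paper's actual choice).
    """
    dim = mu_d.shape[0]
    # radial: one positive magnitude per sample
    r = np.exp(rng.normal(mu_log_r, sigma_log_r, size=(n_samples, 1)))
    # directional: perturb the mean direction, then normalize
    v = mu_d + rng.normal(0.0, 0.1, size=(n_samples, dim))
    u = v / np.linalg.norm(v, axis=1, keepdims=True)
    # recombine: each row has norm exactly r, direction u
    return r * u

# e.g. five posterior samples for one 3-dimensional weight row
w = sample_weights(mu_log_r=0.0, sigma_log_r=0.1,
                   mu_d=np.array([1.0, 0.0, 0.0]), n_samples=5)
```

Under this factorization, the learned radial distribution gives a per-neuron strength that can be thresholded for structured pruning, which is the intuition behind the compression result in the abstract.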

Published

2020-04-03

How to Cite

Oh, C., Adamczewski, K., & Park, M. (2020). Radial and Directional Posteriors for Bayesian Deep Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 5298-5305. https://doi.org/10.1609/aaai.v34i04.5976

Section

AAAI Technical Track: Machine Learning