Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks

Authors

  • Antonio Vergari, University of Bari
  • Robert Peharz, University of Cambridge
  • Nicola Di Mauro, University of Bari
  • Alejandro Molina, TU Dortmund
  • Kristian Kersting, TU Darmstadt
  • Floriana Esposito, University of Bari

DOI:

https://doi.org/10.1609/aaai.v32i1.11734

Keywords:

sum-product networks, representation learning, tractable probabilistic models, unsupervised learning, autoencoders

Abstract

Sum-Product Networks (SPNs) are a deep probabilistic architecture that has so far been employed mainly for tractable inference. Here, we extend their scope towards unsupervised representation learning: we encode samples into continuous and categorical embeddings and show that they can also be decoded back into the original input space by leveraging MPE inference. We characterize when this Sum-Product Autoencoding (SPAE) leads to equivalent reconstructions and extend it to handle missing embedding information. Our experimental results on several multi-label classification problems demonstrate that SPAE is competitive with state-of-the-art autoencoder architectures, even though the SPNs were never trained to reconstruct their inputs.
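To make the encode/decode idea concrete, here is a toy sketch (not the paper's implementation; all class names, the Bernoulli leaves, and the simplified top-down decoding rule are our own illustrative assumptions): a tiny SPN over two binary variables is evaluated bottom-up to produce an embedding from its sum-node activations, then decoded back to the input space with a Viterbi-style, MPE-like top-down traversal.

```python
# Toy SPN autoencoding sketch (illustrative only, not the SPAE implementation).

class Leaf:
    """Bernoulli leaf over one binary variable: P(X_var = 1) = p1."""
    def __init__(self, var, p1):
        self.var, self.p1 = var, p1
    def value(self, x):
        if x[self.var] is None:          # None = variable marginalized out
            return 1.0
        return self.p1 if x[self.var] == 1 else 1.0 - self.p1
    def mpe(self, assignment):
        assignment[self.var] = 1 if self.p1 >= 0.5 else 0

class Product:
    def __init__(self, children):
        self.children = children
    def value(self, x):
        v = 1.0
        for c in self.children:
            v *= c.value(x)
        return v
    def mpe(self, assignment):
        for c in self.children:
            c.mpe(assignment)

class Sum:
    def __init__(self, weighted):        # weighted: list of (weight, child)
        self.weighted = weighted
    def value(self, x):
        return sum(w * c.value(x) for w, c in self.weighted)
    def mpe(self, assignment):
        # Simplified MPE descent: follow the child with the largest weight
        # (real MPE inference maximizes the weighted child value under evidence).
        _, best = max(self.weighted, key=lambda wc: wc[0])
        best.mpe(assignment)

# A tiny SPN: a mixture of two product distributions over X0, X1.
spn = Sum([(0.6, Product([Leaf(0, 0.9), Leaf(1, 0.8)])),
           (0.4, Product([Leaf(0, 0.1), Leaf(1, 0.3)]))])

x = [1, 1]
# Encode: the per-component weighted activations of the root sum node.
embedding = [w * c.value(x) for w, c in spn.weighted]
# Decode: MPE-style top-down traversal fills in an input-space configuration.
decoded = [None, None]
spn.mpe(decoded)
print(embedding, decoded)  # → [0.432, 0.012] [1, 1]
```

The sample [1, 1] yields a strongly first-component embedding, and the decoding traversal recovers [1, 1] without the network ever being trained for reconstruction, which is the effect the paper studies at scale.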

Published

2018-04-29

How to Cite

Vergari, A., Peharz, R., Di Mauro, N., Molina, A., Kersting, K., & Esposito, F. (2018). Sum-Product Autoencoding: Encoding and Decoding Representations Using Sum-Product Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11734