Leveraging Features and Networks for Probabilistic Tensor Decomposition

Authors

  • Piyush Rai, Duke University
  • Yingjian Wang, PhD Student
  • Lawrence Carin, Professor

DOI:

https://doi.org/10.1609/aaai.v29i1.9582

Keywords:

Bayesian methods, tensor decomposition

Abstract

We present a probabilistic model for tensor decomposition where one or more tensor modes may have side-information about the mode entities in the form of their features and/or their adjacency network. We consider a Bayesian approach based on the Canonical PARAFAC (CP) decomposition and enrich this single-layer decomposition approach with a two-layer decomposition. The second layer fits a factor model for each layer-one factor matrix and models the factor matrix via the mode entities' features and/or the network between the mode entities. The second-layer decomposition of each factor matrix also learns a binary latent representation for the entities of that mode, which can be useful in its own right. Our model can handle both continuous and binary tensor observations. Another appealing aspect of our model is the simplicity of the model inference, with easy-to-sample Gibbs updates. We demonstrate the results of our model on several benchmark datasets, comprising both real-valued and binary tensors.
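The generative structure described above can be illustrated with a minimal NumPy sketch. This is not the paper's inference procedure (which uses Gibbs sampling); it only shows the two-layer idea for one mode: the layer-one CP factors of a mode with side-information are themselves generated from the entities' features through a regression weight matrix. All dimensions, names (`F`, `W`, `U1`), and the noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
R = 3              # CP rank
I, J, K = 8, 6, 5  # tensor dimensions (illustrative)
D = 4              # feature dimension for mode-1 entities

# Layer two (sketch): the mode-1 factor matrix U1 is driven by
# entity features F via regression weights W, plus noise.
F = rng.normal(size=(I, D))   # side-information features for mode-1 entities
W = rng.normal(size=(D, R))   # weights (inferred in the full Bayesian model)
U1 = F @ W + 0.1 * rng.normal(size=(I, R))

# Plain layer-one factor matrices for the modes without side-information.
U2 = rng.normal(size=(J, R))
U3 = rng.normal(size=(K, R))

# Layer one: CP decomposition — the tensor is a sum of R rank-1 tensors.
X = np.einsum('ir,jr,kr->ijk', U1, U2, U3)

# A binary tensor observation can be modeled by passing X through a
# link function; thresholding at zero is a crude probit-style stand-in.
X_bin = (X > 0).astype(int)
```

Each unfolding of `X` has rank at most `R`, which is what makes the low-rank CP structure useful for prediction on held-out entries.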

Published

2015-02-21

How to Cite

Rai, P., Wang, Y., & Carin, L. (2015). Leveraging Features and Networks for Probabilistic Tensor Decomposition. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9582

Section

Main Track: Novel Machine Learning Algorithms