Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior

Authors

  • Anh Tong, Ulsan National Institute of Science and Technology
  • Toan M. Tran, VinAI Research
  • Hung Bui, VinAI Research
  • Jaesik Choi, Korea Advanced Institute of Science and Technology, INEEJI

DOI:

https://doi.org/10.1609/aaai.v35i11.17190

Keywords:

Bayesian Learning

Abstract

Choosing a proper set of kernel functions is an important problem in learning Gaussian Process (GP) models, since each kernel structure has a different model complexity and fit to the data. Recent automatic kernel composition methods provide not only accurate prediction but also attractive interpretability through search-based procedures. However, existing methods suffer from slow kernel composition learning. To tackle large-scale data, we propose a new sparse approximate posterior for GPs, MultiSVGP, constructed from groups of inducing points associated with the individual additive kernels in a compositional kernel. We demonstrate that this approximation provides a better fit for learning compositional kernels given empirical observations. We also provide a theoretical justification of the error bound compared to the traditional sparse GP. In contrast to the search-based approach, we present a novel probabilistic algorithm that learns a kernel composition by handling sparsity in kernel selection with a Horseshoe prior. We demonstrate that our model can capture characteristics of time series with significant reductions in computational time and achieves competitive regression performance on real-world data sets.
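
The following is a minimal sketch, not the authors' implementation, of the two ideas described in the abstract: an additive compositional kernel in which each component keeps its own group of inducing points (a Nystrom-style stand-in for the MultiSVGP posterior), and per-component weights drawn from a Horseshoe prior so that irrelevant components are shrunk toward zero. All function and variable names here are illustrative assumptions.

```python
# Minimal sketch (not the authors' code) of: (1) a compositional kernel that is
# a sum of base kernels, each paired with its own group of inducing points, and
# (2) per-kernel weights drawn from a Horseshoe prior so unused components
# shrink toward zero. Names (se_kernel, horseshoe_sample, ...) are illustrative.

import numpy as np

def se_kernel(x1, x2, lengthscale=1.0):
    """Squared-exponential base kernel."""
    d = x1[:, None] - x2[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def per_kernel(x1, x2, period=1.0, lengthscale=1.0):
    """Periodic base kernel."""
    d = np.abs(x1[:, None] - x2[None, :])
    return np.exp(-2.0 * np.sin(np.pi * d / period) ** 2 / lengthscale ** 2)

def horseshoe_sample(size, tau=1.0, rng=None):
    """Horseshoe draw: half-Cauchy local scales times a fixed global scale tau."""
    rng = rng or np.random.default_rng(0)
    lam = np.abs(rng.standard_cauchy(size))   # local shrinkage scales
    return rng.normal(0.0, tau * lam)         # heavy tails, mass near zero

# Each additive component gets its own group of inducing points, mirroring the
# grouped construction described in the abstract.
x = np.linspace(0.0, 10.0, 200)
components = [
    {"kernel": se_kernel,  "inducing": np.linspace(0.0, 10.0, 16)},
    {"kernel": per_kernel, "inducing": np.linspace(0.0, 10.0, 16)},
]
weights = horseshoe_sample(len(components))

# Nystrom-style approximate covariance: a weighted sum of per-component terms,
# each built only from that component's own inducing points.
K_approx = np.zeros((x.size, x.size))
for w, c in zip(weights, components):
    z = c["inducing"]
    Kxz = c["kernel"](x, z)
    Kzz = c["kernel"](z, z) + 1e-6 * np.eye(z.size)
    K_approx += (w ** 2) * Kxz @ np.linalg.solve(Kzz, Kxz.T)
```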

Published

2021-05-18

How to Cite

Tong, A., Tran, T. M., Bui, H., & Choi, J. (2021). Learning Compositional Sparse Gaussian Processes with a Shrinkage Prior. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9906-9914. https://doi.org/10.1609/aaai.v35i11.17190

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV