Near-Optimal Active Learning of Multi-Output Gaussian Processes

Authors

  • Yehong Zhang, National University of Singapore
  • Trong Nghia Hoang, National University of Singapore
  • Kian Hsiang Low, National University of Singapore
  • Mohan Kankanhalli, National University of Singapore

DOI:

https://doi.org/10.1609/aaai.v30i1.10209

Keywords:

Active learning, Gaussian process, multi-output Gaussian process

Abstract

This paper addresses the problem of active learning of a multi-output Gaussian process (MOGP) model representing multiple types of coexisting correlated environmental phenomena. In contrast to existing works, our active learning problem involves selecting not just the most informative sampling locations to be observed but also the types of measurements at each selected location for minimizing the predictive uncertainty (i.e., posterior joint entropy) of a target phenomenon of interest given a sampling budget. Unfortunately, such an entropy criterion scales poorly in the numbers of candidate sampling locations and selected observations when optimized. To resolve this issue, we first exploit a structure common to sparse MOGP models for deriving a novel active learning criterion. Then, we exploit a relaxed form of submodularity property of our new criterion for devising a polynomial-time approximation algorithm that guarantees a constant-factor approximation of that achieved by the optimal set of selected observations. Empirical evaluation on real-world datasets shows that our proposed approach outperforms existing algorithms for active learning of MOGP and single-output GP models.
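To make the selection problem concrete, the following is a minimal illustrative sketch (not the authors' algorithm) of greedy entropy-based active learning with a simple multi-output GP. It uses an intrinsic coregionalization model (a shared RBF kernel over locations scaled by a between-outputs covariance matrix `B`) and, at each step, picks the (location, measurement-type) pair that most reduces the posterior entropy of the target output. All names (`mogp_cov`, `posterior_entropy`, `greedy_select`), the kernel choice, and the noise level are assumptions for illustration only; the paper's contribution is a criterion and approximation guarantee that avoid this naive exhaustive greedy evaluation.

```python
import numpy as np

def rbf(X1, X2, ls=1.0):
    # Squared-exponential kernel over sampling locations.
    d = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d / ls**2)

def mogp_cov(pairs1, pairs2, locs, B, ls=1.0):
    # Covariance between (location index, output type) pairs under an
    # intrinsic coregionalization model: K((x,i),(x',j)) = B[i,j] * k(x,x').
    i1, t1 = zip(*pairs1)
    i2, t2 = zip(*pairs2)
    return B[np.ix_(t1, t2)] * rbf(locs[list(i1)], locs[list(i2)], ls)

def posterior_entropy(target_pairs, obs_pairs, locs, B, noise=0.1):
    # Gaussian posterior entropy of the target output (up to an additive
    # constant): 0.5 * log det of the posterior covariance.
    Ktt = mogp_cov(target_pairs, target_pairs, locs, B)
    if obs_pairs:
        Kto = mogp_cov(target_pairs, obs_pairs, locs, B)
        Koo = mogp_cov(obs_pairs, obs_pairs, locs, B) \
              + noise * np.eye(len(obs_pairs))
        Ktt = Ktt - Kto @ np.linalg.solve(Koo, Kto.T)
    _, logdet = np.linalg.slogdet(Ktt + 1e-9 * np.eye(len(target_pairs)))
    return 0.5 * logdet

def greedy_select(candidates, target_pairs, locs, B, budget, noise=0.1):
    # Greedily add the candidate observation that minimizes the
    # predictive uncertainty of the target phenomenon.
    chosen, remaining = [], list(candidates)
    for _ in range(budget):
        best = min(remaining, key=lambda c: posterior_entropy(
            target_pairs, chosen + [c], locs, B, noise))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Toy usage: 5 locations on a line, 2 correlated output types;
# minimize uncertainty of type 0 using a budget of 3 observations
# of either type.
locs = np.linspace(0.0, 4.0, 5).reshape(-1, 1)
B = np.array([[1.0, 0.8],
              [0.8, 1.0]])
target = [(i, 0) for i in range(5)]
candidates = [(i, t) for i in range(5) for t in range(2)]
chosen = greedy_select(candidates, target, locs, B, budget=3)
```

Each greedy step here re-evaluates the full posterior entropy for every remaining candidate, which is exactly the poor scaling the paper's criterion and constant-factor approximation algorithm are designed to avoid.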

Published

2016-03-02

How to Cite

Zhang, Y., Hoang, T. N., Low, K. H., & Kankanhalli, M. (2016). Near-Optimal Active Learning of Multi-Output Gaussian Processes. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10209

Section

Technical Papers: Machine Learning Methods