Domain Invariant Learning for Gaussian Processes and Bayesian Exploration

Authors

  • Xilong Zhao, Shanghai Jiao Tong University
  • Siyuan Bian, Shanghai Jiao Tong University
  • Yaoyun Zhang, Shanghai Jiao Tong University
  • Yuliang Zhang, Shanghai Jiao Tong University
  • Qinying Gu, Shanghai Artificial Intelligence Laboratory
  • Xinbing Wang, Shanghai Jiao Tong University
  • Chenghu Zhou, Shanghai Jiao Tong University
  • Nanyang Ye, Shanghai Jiao Tong University

DOI:

https://doi.org/10.1609/aaai.v38i15.29646

Keywords:

ML: Bayesian Learning

Abstract

Out-of-distribution (OOD) generalization has long been a challenging and largely unsolved problem. Gaussian processes (GPs), popular probabilistic models especially in the small-data regime, are often presumed to have strong OOD generalization abilities. Surprisingly, their OOD generalization has been under-explored compared with other lines of GP research. In this paper, we show that GPs are not immune to this problem and propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) based on a min-max optimization of the likelihood. DIL-GP discovers the heterogeneity in the data and enforces invariance across the partitioned subsets. We further extend DIL-GP to improve the adaptability of Bayesian optimization to changing environments. Numerical experiments demonstrate the superiority of DIL-GP for prediction on several synthetic and real-world datasets. We further demonstrate the effectiveness of DIL-GP-based Bayesian optimization in a PID parameter tuning experiment for a quadrotor. The full version and source code are available at: https://github.com/Billzxl/DIL-GP.
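
To make the abstract's core mechanism concrete, the following is a minimal PyTorch sketch of a min-max optimization over the GP marginal likelihood across data partitions. It is not the authors' implementation (see the linked repository for the official code): it assumes the domain partition is given in advance, whereas DIL-GP discovers the partition from the data, and the names rbf_kernel, gp_nll, and the toy domains list are hypothetical.

import torch

torch.manual_seed(0)

def rbf_kernel(x1, x2, log_ls, log_var):
    # Squared-exponential kernel with learnable length-scale and variance.
    d2 = (x1.unsqueeze(1) - x2.unsqueeze(0)).pow(2).sum(-1)
    return log_var.exp() * torch.exp(-0.5 * d2 / log_ls.exp() ** 2)

def gp_nll(x, y, log_ls, log_var, log_noise):
    # Negative log marginal likelihood of a zero-mean GP (up to a constant).
    K = rbf_kernel(x, x, log_ls, log_var) + log_noise.exp() * torch.eye(len(x))
    L = torch.linalg.cholesky(K)
    alpha = torch.cholesky_solve(y.unsqueeze(1), L).squeeze(1)
    return 0.5 * y.dot(alpha) + L.diagonal().log().sum()

# Two toy "domains" with shifted input distributions. Here the split is
# assumed to be known; DIL-GP itself infers such a partition from the data.
domains = [(torch.randn(40, 1), torch.randn(40)),
           (torch.randn(40, 1) + 2.0, torch.randn(40))]

log_ls = torch.zeros((), requires_grad=True)
log_var = torch.zeros((), requires_grad=True)
log_noise = torch.zeros((), requires_grad=True)
opt = torch.optim.Adam([log_ls, log_var, log_noise], lr=0.05)

for step in range(200):
    # Inner max: the worst-case per-domain likelihood.
    losses = torch.stack([gp_nll(x, y, log_ls, log_var, log_noise)
                          for x, y in domains])
    worst = losses.max()
    # Outer min: fit the shared hyperparameters against that worst case.
    opt.zero_grad()
    worst.backward()
    opt.step()

Training against the worst-performing partition, rather than the pooled likelihood, is what pushes the hyperparameters toward explanations that hold invariantly across the subsets.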

Published

2024-03-24

How to Cite

Zhao, X., Bian, S., Zhang, Y., Zhang, Y., Gu, Q., Wang, X., Zhou, C., & Ye, N. (2024). Domain Invariant Learning for Gaussian Processes and Bayesian Exploration. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 17024-17032. https://doi.org/10.1609/aaai.v38i15.29646

Section

AAAI Technical Track on Machine Learning VI