Meta-Learning PAC-Bayes Priors in Model Averaging

Authors

  • Yimin Huang, Huawei Noah's Ark Lab
  • Weiran Huang, Huawei Noah's Ark Lab
  • Liang Li, Huawei Noah's Ark Lab
  • Zhenguo Li, Huawei Noah's Ark Lab

DOI:

https://doi.org/10.1609/aaai.v34i04.5841

Abstract

Model uncertainty has become one of the most important problems in both academia and industry. In this paper, we consider the scenario in which a common model set is used for model averaging, rather than selecting a single final model through a model selection procedure, in order to account for model uncertainty and improve the reliability and accuracy of inferences. A main challenge here is learning the prior over the model set. To tackle this problem, we propose two data-based algorithms for obtaining proper priors for model averaging. The first is a meta-learner: analysts use similar historical tasks to extract information about the prior. The second is a base-learner: a subsampling method processes the data step by step. Theoretically, we present an upper bound on the risk of our algorithm, which guarantees worst-case performance. In practice, both methods perform well in simulations and real-data studies, especially with poor-quality data.
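
As a rough illustration of the base-learner idea mentioned in the abstract, the Python sketch below estimates a data-driven prior over a small candidate model set by repeated subsampling and then forms a model-averaged prediction. This is not the paper's algorithm: the candidate models, squared-error loss, and softmax weighting (with an arbitrary temperature) are assumptions made purely for illustration.

    # Illustrative sketch only: a data-driven prior over a candidate model set,
    # estimated by repeated subsampling, then used for model averaging.
    # NOT the paper's algorithm; candidates, loss, and weighting are assumed here.
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Toy data: y depends on the first two of five features, plus noise.
    X = rng.normal(size=(300, 5))
    y = X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=300)

    # Candidate model set: linear models on nested feature subsets.
    candidates = [list(range(k)) for k in range(1, 6)]

    def subsample_losses(X, y, candidates, n_rounds=20, frac=0.7):
        """Average held-out squared loss of each candidate over random subsamples."""
        losses = np.zeros(len(candidates))
        for _ in range(n_rounds):
            X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=frac)
            for i, cols in enumerate(candidates):
                model = LinearRegression().fit(X_tr[:, cols], y_tr)
                pred = model.predict(X_te[:, cols])
                losses[i] += np.mean((y_te - pred) ** 2)
        return losses / n_rounds

    # Turn subsampled losses into a prior via a softmax (temperature is arbitrary).
    losses = subsample_losses(X, y, candidates)
    prior = np.exp(-losses / 0.1)
    prior /= prior.sum()

    # Model-averaged prediction: refit each candidate on all data, average by the prior.
    preds = np.column_stack([
        LinearRegression().fit(X[:, cols], y).predict(X[:, cols]) for cols in candidates
    ])
    y_avg = preds @ prior
    print("prior over candidates:", np.round(prior, 3))

The prior concentrates on candidates whose subsampled held-out loss is small, so the averaged prediction down-weights clearly misspecified models instead of committing to a single selected one.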

Published

2020-04-03

How to Cite

Huang, Y., Huang, W., Li, L., & Li, Z. (2020). Meta-Learning PAC-Bayes Priors in Model Averaging. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4198-4205. https://doi.org/10.1609/aaai.v34i04.5841

Section

AAAI Technical Track: Machine Learning