Stable Prediction with Model Misspecification and Agnostic Distribution Shift

Authors

  • Kun Kuang, Zhejiang University
  • Ruoxuan Xiong, Stanford University
  • Peng Cui, Tsinghua University
  • Susan Athey, Stanford University
  • Bo Li, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v34i04.5876

Abstract

For many machine learning algorithms, two main assumptions are required to guarantee performance: one is that the test data are drawn from the same distribution as the training data, and the other is that the model is correctly specified. In real applications, however, we often have little prior knowledge of the test data or of the underlying true model. Under model misspecification, agnostic distribution shift between training and test data leads to inaccurate parameter estimation and unstable prediction across unknown test data. To address these problems, we propose a novel Decorrelated Weighting Regression (DWR) algorithm that jointly optimizes a variable decorrelation regularizer and a weighted regression model. The variable decorrelation regularizer estimates a weight for each sample such that the variables are decorrelated on the weighted training data. These weights are then used in the weighted regression to improve the accuracy of estimating the effect of each variable, thus helping to improve the stability of prediction across unknown test data. Extensive experiments clearly demonstrate that our DWR algorithm can significantly improve the accuracy of parameter estimation and the stability of prediction under model misspecification and agnostic distribution shift.
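The abstract describes two coupled components: a sample-reweighting step that decorrelates the input variables on the weighted training data, and a regression fit with those weights. The sketch below illustrates that idea in plain NumPy; the squared-weighted-covariance penalty, the alternating update scheme, and all function and variable names (decorrelation_loss, fit_dwr_sketch) are illustrative assumptions, not the authors' exact DWR objective or optimizer.

```python
# A minimal sketch, assuming a squared-weighted-covariance penalty for the
# decorrelation term and a simple alternating update; this is NOT the exact
# DWR formulation from the paper.
import numpy as np


def decorrelation_loss(W, X):
    """Sum of squared weighted covariances between all pairs of variables."""
    w = W / W.sum()                          # normalized sample weights
    Xc = X - (w[:, None] * X).sum(axis=0)    # weighted mean-centering
    cov = (w[:, None] * Xc).T @ Xc           # weighted covariance matrix
    off_diag = cov - np.diag(np.diag(cov))
    return (off_diag ** 2).sum()


def fit_dwr_sketch(X, y, n_outer=50, lr=0.05, eps=1e-4):
    """Alternate between (1) updating sample weights to decorrelate variables
    and (2) weighted least squares for the regression coefficients."""
    n, p = X.shape
    log_w = np.zeros(n)                      # weights parametrized as exp(log_w) > 0
    for _ in range(n_outer):
        # (1) one finite-difference gradient step on the decorrelation loss
        W = np.exp(log_w)
        base = decorrelation_loss(W, X)
        grad = np.zeros(n)
        for i in range(n):
            W_pert = W.copy()
            W_pert[i] *= np.exp(eps)
            grad[i] = (decorrelation_loss(W_pert, X) - base) / eps
        log_w -= lr * grad
        # (2) weighted least squares with the current weights
        W = np.exp(log_w)
        Xw = X * W[:, None]
        beta = np.linalg.solve(X.T @ Xw + 1e-6 * np.eye(p), Xw.T @ y)
    return np.exp(log_w), beta
```

The intent, as stated in the abstract, is that decorrelating the variables on the weighted sample makes each coefficient estimate less sensitive to training-specific correlations among variables, and therefore more stable under agnostic shifts in the test distribution.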

Published

2020-04-03

How to Cite

Kuang, K., Xiong, R., Cui, P., Athey, S., & Li, B. (2020). Stable Prediction with Model Misspecification and Agnostic Distribution Shift. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4485-4492. https://doi.org/10.1609/aaai.v34i04.5876

Section

AAAI Technical Track: Machine Learning