Sharing Pattern Submodels for Prediction with Missing Values
DOI:
https://doi.org/10.1609/aaai.v37i8.26179
Keywords:
ML: Classification and Regression, APP: Healthcare, Medicine & Wellness, ML: Transparent, Interpretable, Explainable ML, RU: Other Foundations of Reasoning Under Uncertainty
Abstract
Missing values are unavoidable in many applications of machine learning and present challenges both during training and at test time. When variables are missing in recurring patterns, fitting separate pattern submodels has been proposed as a solution. However, fitting models independently does not make efficient use of all available data. Conversely, fitting a single shared model to the full data set relies on imputation, which often leads to biased results when missingness depends on unobserved factors. We propose an alternative approach, called sharing pattern submodels (SPSM), which i) makes predictions that are robust to missing values at test time, ii) maintains or improves the predictive power of pattern submodels, and iii) has a short description, enabling improved interpretability. Parameter sharing is enforced through sparsity-inducing regularization, which we prove leads to consistent estimation. Finally, we give conditions for when a sharing model is optimal, even when both missingness and the target outcome depend on unobserved variables. Classification and regression experiments on synthetic and real-world data sets demonstrate that our models achieve a favorable tradeoff between pattern specialization and information sharing.
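To make the idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it fits one linear submodel per missingness pattern, parameterized as a shared coefficient vector plus a per-pattern deviation, and applies an L1 (sparsity-inducing) penalty to the deviations so parameters are shared across patterns unless the data support a pattern-specific difference. The function names (`fit_shared_pattern_submodels`, `predict`), hyperparameters (`lam`, `lr`), and the choice of squared loss with proximal gradient descent are all assumptions made for illustration, not the estimator defined in the paper.

```python
# Illustrative sketch only: NOT the paper's exact SPSM estimator.
import numpy as np


def fit_shared_pattern_submodels(X, y, lam=0.1, lr=0.01, n_iter=2000):
    """Per-pattern linear submodels with L1-penalized deviations from a shared vector."""
    n, d = X.shape
    mask = ~np.isnan(X)                      # observed-entry indicator
    patterns = np.unique(mask, axis=0)       # distinct missingness patterns
    X0 = np.nan_to_num(X)                    # zero-fill only for arithmetic
    beta0 = np.zeros(d)                      # shared coefficients
    delta = np.zeros((len(patterns), d))     # per-pattern deviations

    pat_idx = np.array([np.where((patterns == m).all(axis=1))[0][0] for m in mask])

    for _ in range(n_iter):
        grad0 = np.zeros(d)
        grad_d = np.zeros_like(delta)
        for p, pat in enumerate(patterns):
            rows = pat_idx == p
            Xp = X0[rows][:, pat]            # observed features for pattern p
            bp = (beta0 + delta[p])[pat]
            resid = Xp @ bp - y[rows]
            g = Xp.T @ resid / n             # squared-loss gradient for this pattern
            grad0[pat] += g                  # same gradient hits shared part...
            grad_d[p, pat] += g              # ...and the pattern-specific deviation
        beta0 -= lr * grad0
        delta -= lr * grad_d
        # proximal step: soft-threshold the deviations (L1 penalty), not beta0
        delta = np.sign(delta) * np.maximum(np.abs(delta) - lr * lam, 0.0)

    return beta0, delta, patterns


def predict(X, beta0, delta, patterns):
    """Predict with the submodel matching each row's missingness pattern.

    Assumes every test pattern was also seen during training.
    """
    mask = ~np.isnan(X)
    X0 = np.nan_to_num(X)
    preds = np.empty(len(X))
    for i, m in enumerate(mask):
        p = np.where((patterns == m).all(axis=1))[0][0]
        b = (beta0 + delta[p]) * m           # use only observed coefficients
        preds[i] = X0[i] @ b
    return preds
```

With `lam` large, all deviations shrink to zero and the fit collapses to a single shared model; with `lam` near zero, it approaches independently fitted pattern submodels, which is the tradeoff the abstract describes.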
Published
2023-06-26
How to Cite
Stempfle, L., Panahi, A., & Johansson, F. D. (2023). Sharing Pattern Submodels for Prediction with Missing Values. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9882-9890. https://doi.org/10.1609/aaai.v37i8.26179
Section
AAAI Technical Track on Machine Learning III