Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation

Authors

  • Hui Chen, Jiangsu Normal University
  • Yinxu Jia, Nankai University
  • Guanghui Wang, East China Normal University
  • Changliang Zou, Nankai University

DOI:

https://doi.org/10.1609/aaai.v38i10.29008

Keywords:

ML: Calibration & Uncertainty Quantification, ML: Auto ML and Hyperparameter Tuning

Abstract

Accurately detecting multiple change-points is critical for various applications, but determining the optimal number of change-points remains a challenge. Existing approaches based on information criteria attempt to balance goodness-of-fit and model complexity, but their performance varies depending on the model. Recently, data-driven selection criteria based on cross-validation have been proposed, but these methods can be prone to slight overfitting in finite samples. In this paper, we introduce a method that controls the probability of overestimation and provides uncertainty quantification for learning multiple change-points via cross-validation. We frame this problem as a sequence of model comparison problems and leverage high-dimensional inferential procedures. We demonstrate the effectiveness of our approach through experiments on finite-sample data, showing superior uncertainty quantification for overestimation compared to existing methods. Our approach has broad applicability and can be used in diverse change-point models.
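To illustrate the kind of cross-validation-based selection the abstract refers to, the sketch below chooses the number of change-points in a univariate mean-shift model by an order-preserved odd/even split: change-points are fitted on the odd-indexed observations (here via simple greedy binary segmentation, an assumption for illustration, not the authors' procedure) and scored on the even-indexed ones. This is a minimal toy, and in finite samples the CV loss can mildly favor too many change-points, which is the overfitting issue the paper addresses.

```python
import numpy as np

def best_split(x):
    # Index t (1 <= t < len(x)) minimizing total within-segment SSE for one split.
    best, arg = np.inf, None
    for t in range(1, len(x)):
        sse = ((x[:t] - x[:t].mean())**2).sum() + ((x[t:] - x[t:].mean())**2).sum()
        if sse < best:
            best, arg = sse, t
    return arg

def fit_changepoints(x, k):
    # Greedy binary segmentation: repeatedly split the segment where one
    # extra split reduces the training SSE the most, until k change-points.
    segs, cps = [(0, len(x))], []
    for _ in range(k):
        gains = []
        for (a, b) in segs:
            if b - a < 2:
                gains.append((0.0, None, (a, b)))
                continue
            seg = x[a:b]
            sse0 = ((seg - seg.mean())**2).sum()
            t = best_split(seg)
            sse1 = (((seg[:t] - seg[:t].mean())**2).sum()
                    + ((seg[t:] - seg[t:].mean())**2).sum())
            gains.append((sse0 - sse1, a + t, (a, b)))
        gain, t, seg = max(gains, key=lambda g: g[0])
        if t is None:
            break
        segs.remove(seg)
        a, b = seg
        segs += [(a, t), (t, b)]
        cps.append(t)
    return sorted(cps)

def cv_loss(x, k):
    # Order-preserved sample splitting: fit piecewise-constant means on the
    # odd-indexed half, evaluate squared prediction error on the even half.
    train, valid = x[::2], x[1::2]
    bounds = [0] + fit_changepoints(train, k) + [len(train)]
    loss = 0.0
    for a, b in zip(bounds[:-1], bounds[1:]):
        loss += ((valid[a:b] - train[a:b].mean())**2).sum()
    return loss

# Toy example: two true change-points (mean 0 -> 3 -> -3).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 100),
                    rng.normal(3, 1, 100),
                    rng.normal(-3, 1, 100)])
losses = {k: cv_loss(x, k) for k in range(5)}
k_hat = min(losses, key=losses.get)
```

A plain argmin over the CV losses, as above, carries no guarantee against choosing too many change-points; the paper's contribution is to turn this selection into a sequence of model comparisons with a controlled probability of overestimation.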

Published

2024-03-24

How to Cite

Chen, H., Jia, Y., Wang, G., & Zou, C. (2024). Uncertainty Quantification for Data-Driven Change-Point Learning via Cross-Validation. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11294-11301. https://doi.org/10.1609/aaai.v38i10.29008

Section

AAAI Technical Track on Machine Learning I