Selecting Near-Optimal Learners via Incremental Data Allocation

Authors

  • Ashish Sabharwal, Allen Institute for AI
  • Horst Samulowitz, IBM T. J. Watson Research Center
  • Gerald Tesauro, IBM T. J. Watson Research Center

DOI

https://doi.org/10.1609/aaai.v30i1.10316

Keywords

classifier selection, bandit algorithms, big data

Abstract

We study a novel machine learning (ML) problem setting of sequentially allocating small subsets of training data amongst a large set of classifiers. The goal is to select a classifier that will give near-optimal accuracy when trained on all data, while also minimizing the cost of misallocated samples. This is motivated by large modern datasets and ML toolkits with many combinations of learning algorithms and hyper-parameters. Inspired by the principle of "optimism under uncertainty," we propose an innovative strategy, Data Allocation using Upper Bounds (DAUB), which robustly achieves these objectives across a variety of real-world datasets. We further develop substantial theoretical support for DAUB in an idealized setting where the expected accuracy of a classifier trained on $n$ samples can be known exactly. Under these conditions we establish a rigorous sub-linear bound on the regret of the approach (in terms of misallocated data), as well as a rigorous bound on suboptimality of the selected classifier. Our accuracy estimates using real-world datasets only entail mild violations of the theoretical scenario, suggesting that the practical behavior of DAUB is likely to approach the idealized behavior.
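The allocation strategy the abstract describes can be illustrated with a small, self-contained sketch. This is not the authors' implementation: the learning curves, learner names, and parameters below are synthetic stand-ins for "expected accuracy of a classifier trained on $n$ samples," and the upper bound is a simple optimistic linear extrapolation of the latest secant slope (which over-estimates any concave learning curve at the full dataset size). It shows only the core loop: repeatedly grant one more batch of data to the learner whose projected full-data accuracy bound is currently highest.

```python
import math

# Hypothetical diminishing-returns learning curves, acc(n) = ceiling - c / sqrt(n).
# These curves and all parameters are illustrative, not taken from the paper.
CURVES = {
    "learner_A": lambda n: 0.90 - 2.0 / math.sqrt(n),  # high ceiling, slow start
    "learner_B": lambda n: 0.85 - 0.5 / math.sqrt(n),
    "learner_C": lambda n: 0.80 - 0.1 / math.sqrt(n),  # low ceiling, fast start
}

def daub_sketch(curves, n_full=10_000, init=100, batch=500):
    """Incrementally allocate data; always extend the learner whose
    optimistic accuracy projection at n_full is currently highest."""
    alloc = {name: init for name in curves}
    history = {name: [(init, f(init))] for name, f in curves.items()}
    best = None
    while max(alloc.values()) < n_full:
        bounds = {}
        for name in curves:
            n1, a1 = history[name][-1]
            if len(history[name]) < 2:
                # A learner extended only once has no slope estimate yet:
                # stay maximally optimistic so it gets tried at least twice.
                bounds[name] = 1.0
                continue
            n0, a0 = history[name][-2]
            slope = max((a1 - a0) / (n1 - n0), 0.0)
            # For a concave learning curve, linear extrapolation with the
            # latest secant slope over-estimates accuracy at n_full, so
            # this is a valid optimistic upper bound (capped at 1.0).
            bounds[name] = min(1.0, a1 + slope * (n_full - n1))
        best = max(bounds, key=bounds.get)  # optimism under uncertainty
        alloc[best] = min(alloc[best] + batch, n_full)
        history[best].append((alloc[best], curves[best](alloc[best])))
    return best, alloc

winner, alloc = daub_sketch(CURVES)
```

In this toy run the learner with the highest full-data accuracy ends up trained on all `n_full` samples, while the suboptimal learners' upper bounds fall below the leader's early, so they stop receiving data; the total they do receive is the "misallocated" quantity that the paper's regret bound controls.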

Published

2016-03-02

How to Cite

Sabharwal, A., Samulowitz, H., & Tesauro, G. (2016). Selecting Near-Optimal Learners via Incremental Data Allocation. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10316

Section

Technical Papers: Machine Learning Methods