Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees

Authors

  • Maria-Florina Balcan, Carnegie Mellon University
  • Tuomas Sandholm, Carnegie Mellon University
  • Ellen Vitercik, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v34i04.5721

Abstract

Algorithms typically come with tunable parameters that have a considerable impact on the computational resources they consume. Too often, practitioners must hand-tune the parameters, a tedious and error-prone task. A recent line of research provides algorithms that return nearly-optimal parameters from within a finite set. These algorithms can be used when the parameter space is infinite by providing as input a random sample of parameters. This data-independent discretization, however, might miss pockets of nearly-optimal parameters: prior research has presented scenarios where the only viable parameters lie within an arbitrarily small region. We provide an algorithm that learns a finite set of promising parameters from within an infinite set. Our algorithm can help compile a configuration portfolio, or it can be used to select the input to a configuration algorithm for finite parameter spaces. Our approach applies to any configuration problem that satisfies a simple yet ubiquitous structure: the algorithm's performance is a piecewise constant function of its parameters. Prior research has exhibited this structure in domains from integer programming to clustering.
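
To make the piecewise-constant structure concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of how that structure lets one reduce an infinite one-dimensional parameter interval to a finite candidate set: the breakpoints of each sampled instance's cost function partition the interval into regions where every instance's cost, and hence the average cost, is constant, so one representative per region suffices. The oracles piece_boundaries and cost are assumed, problem-specific inputs, not functions from the paper.

```python
# Hypothetical sketch of candidate-set construction from piecewise-constant costs.
# Assumed oracles (illustrative, not from the paper):
#   piece_boundaries(instance, lo, hi) -> sorted parameter values where the
#       algorithm's cost on `instance` changes (breakpoints of its constant pieces)
#   cost(instance, param) -> computational cost (e.g., runtime) of the configured
#       algorithm on `instance` at parameter value `param`

def candidate_parameters(instances, lo, hi, piece_boundaries):
    """Return one representative parameter per region on which every sampled
    instance's cost is constant; the average cost is constant on each region."""
    cuts = {lo, hi}
    for inst in instances:
        cuts.update(b for b in piece_boundaries(inst, lo, hi) if lo < b < hi)
    cuts = sorted(cuts)
    # The midpoint of each region between consecutive breakpoints represents it.
    return [(a + b) / 2 for a, b in zip(cuts, cuts[1:])]

def best_parameter(instances, candidates, cost):
    """Pick the candidate with the smallest average cost over the sampled instances."""
    return min(candidates,
               key=lambda p: sum(cost(inst, p) for inst in instances) / len(instances))
```

The resulting finite candidate set could then serve as a configuration portfolio or as the input to a configuration algorithm for finite parameter spaces, as the abstract describes; the sketch ignores the sample-complexity and efficiency considerations that are the paper's focus.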

Published

2020-04-03

How to Cite

Balcan, M.-F., Sandholm, T., & Vitercik, E. (2020). Learning to Optimize Computational Resources: Frugal Training with Generalization Guarantees. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3227-3234. https://doi.org/10.1609/aaai.v34i04.5721

Issue

Vol. 34 No. 04 (2020)

Section

AAAI Technical Track: Machine Learning