Machine Learning for Online Algorithm Selection under Censored Feedback
Keywords: Search And Optimization (SO)
Abstract

In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms. For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime. As the latter is known to exhibit a heavy-tailed distribution, an algorithm is normally stopped when exceeding a predefined upper time limit. As a consequence, machine learning methods used to optimize an algorithm selection strategy in a data-driven manner need to deal with right-censored samples, a problem that has received little attention in the literature so far. In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem. Moreover, we adapt them towards runtime-oriented losses, allowing for partially censored data while keeping space and time complexity independent of the time horizon. In an extensive experimental evaluation on an adapted version of the ASlib benchmark, we demonstrate that theoretically well-founded methods based on Thompson sampling perform particularly strongly and improve upon existing methods.
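To make the setting concrete, the following is a minimal, simplified sketch (not the authors' method) of Thompson sampling for selecting among candidate solvers under right-censored runtime feedback. It reduces the runtime signal to a Bernoulli "solved within the cutoff" outcome with a Beta posterior per algorithm; a timed-out (censored) run counts as a failure. The solver names, success rates, and cutoff below are illustrative assumptions.

```python
import random

def thompson_select(stats):
    """Sample from each algorithm's Beta posterior and pick the best draw.

    stats: list of (alpha, beta) Beta-posterior parameters, one per algorithm.
    """
    samples = [random.betavariate(a, b) for a, b in stats]
    return max(range(len(samples)), key=samples.__getitem__)

def update(stats, arm, runtime, cutoff):
    """Bayesian update; a censored run (runtime >= cutoff) counts as a failure."""
    a, b = stats[arm]
    if runtime < cutoff:          # uncensored: solved within the time limit
        stats[arm] = (a + 1, b)
    else:                         # right-censored: stopped at the cutoff
        stats[arm] = (a, b + 1)

# Toy simulation with two hypothetical solvers (success rates are made up).
random.seed(0)
stats = [(1.0, 1.0), (1.0, 1.0)]   # uniform Beta(1, 1) priors
true_rates = [0.3, 0.8]            # chance of finishing before the cutoff
cutoff = 10.0
for _ in range(500):
    arm = thompson_select(stats)
    solved = random.random() < true_rates[arm]
    runtime = random.uniform(0, cutoff) if solved else cutoff
    update(stats, arm, runtime, cutoff)
```

After a few hundred rounds the posterior mass concentrates on the better solver, so it is pulled far more often. The paper's actual contribution goes further, adapting bandit algorithms to runtime-oriented losses rather than a binary solved/timed-out signal.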
How to Cite
Tornede, A., Bengs, V., & Hüllermeier, E. (2022). Machine Learning for Online Algorithm Selection under Censored Feedback. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 10370-10380. https://doi.org/10.1609/aaai.v36i9.21279
AAAI Technical Track on Search and Optimization