Leaving the Nest: Going beyond Local Loss Functions for Predict-Then-Optimize

Authors

  • Sanket Shah Harvard University
  • Bryan Wilder Carnegie Mellon University
  • Andrew Perrault The Ohio State University
  • Milind Tambe Harvard University

DOI:

https://doi.org/10.1609/aaai.v38i13.29410

Keywords:

ML: Classification and Regression, PRS: Planning under Uncertainty, RU: Decision/Utility Theory

Abstract

Predict-then-Optimize is a framework for using machine learning to perform decision-making under uncertainty. The central research question it asks is, "How can we use the structure of a decision-making task to tailor ML models for that specific task?" To this end, recent work has proposed learning task-specific loss functions that capture this underlying structure. However, current approaches make restrictive assumptions about the form of these losses and their impact on ML model behavior. These assumptions not only lead to approaches with high computational cost, but also to poor performance when they are violated in practice. In this paper, we propose solutions to these issues, avoiding the aforementioned assumptions and utilizing the ML model's features to increase the sample efficiency of learning loss functions. We empirically show that our method achieves state-of-the-art results in four domains from the literature, often requiring an order of magnitude fewer samples than comparable methods from past work. Moreover, our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
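To make the Predict-then-Optimize setting concrete, the toy sketch below (not the authors' method; the selection problem and values are illustrative assumptions) contrasts prediction accuracy with decision quality: a model predicts item values, a downstream optimizer picks the top-k items, and the decision is scored under the true values. A model with lower MSE can yield a better decision even though neither model is "accurate" in the same way.

```python
import numpy as np

def optimize(predicted_values, k=2):
    """Decision step: choose the k items with the highest predicted value."""
    return np.argsort(predicted_values)[-k:]

def decision_quality(chosen, true_values):
    """Score the decision under the true (realized) parameters."""
    return float(true_values[np.asarray(chosen)].sum())

# True item values (unknown at prediction time) and two candidate predictions.
true_values = np.array([3.0, 1.0, 4.0, 1.5])
pred_a = np.array([2.9, 1.2, 3.8, 1.4])   # low MSE, preserves the true ranking
pred_b = np.array([4.0, 3.5, 0.5, 0.2])   # high MSE, ranking badly wrong

for name, pred in [("A", pred_a), ("B", pred_b)]:
    mse = float(np.mean((pred - true_values) ** 2))
    dq = decision_quality(optimize(pred), true_values)
    print(f"model {name}: MSE={mse:.3f}, decision quality={dq:.1f}")
```

Here model A attains decision quality 7.0 (picks the two truly best items) while model B attains only 4.0, which is the gap a task-specific loss aims to capture and a standard regression loss does not.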

Published

2024-03-24

How to Cite

Shah, S., Wilder, B., Perrault, A., & Tambe, M. (2024). Leaving the Nest: Going beyond Local Loss Functions for Predict-Then-Optimize. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14902-14909. https://doi.org/10.1609/aaai.v38i13.29410

Section

AAAI Technical Track on Machine Learning IV