A Divide and Conquer Algorithm for Predict+Optimize with Non-convex Problems

Authors

  • Ali Ugur Guler, University of Melbourne
  • Emir Demirović, Delft University of Technology
  • Jeffrey Chan, RMIT University
  • James Bailey, University of Melbourne
  • Christopher Leckie, University of Melbourne
  • Peter J. Stuckey, Monash University

DOI:

https://doi.org/10.1609/aaai.v36i4.20289

Keywords:

Constraint Satisfaction And Optimization (CSO), Machine Learning (ML)

Abstract

The predict+optimize problem combines machine learning and combinatorial optimization: the problem coefficients are first predicted and then used to solve the optimization problem. While this problem can be solved in two separate stages, recent research shows that end-to-end models can achieve better results. This requires differentiating through a discrete combinatorial function. Models that use differentiable surrogates are prone to approximation errors, while existing exact models are either limited to dynamic programming or do not generalize well with scarce data. In this work we propose a novel divide-and-conquer algorithm based on transition points to reason over exact optimization problems and predict the coefficients using the optimization loss. Moreover, our model is not limited to dynamic programming problems. We also introduce a greedy version, which achieves similar results with less computation. In comparison with other predict+optimize frameworks, we show that our method outperforms existing exact frameworks and reasons over hard combinatorial problems better than surrogate methods.
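To make the setup concrete, the following is a minimal, illustrative sketch (not the paper's algorithm) of the predict+optimize loss on a toy 0-1 knapsack instance: predicted coefficients are passed to an exact combinatorial solver, and the prediction is scored by the regret it induces in the optimization objective rather than by a regression error. The brute-force solver, the instance data, and the regret definition below are assumptions made for illustration only.

```python
# Illustrative sketch of the predict+optimize loss, assuming a toy
# 0-1 knapsack problem and a standard regret definition; this is not
# the divide-and-conquer algorithm described in the paper.
from itertools import combinations

def solve_knapsack(values, weights, capacity):
    """Brute-force 0-1 knapsack: return the index set with maximal total value."""
    n = len(values)
    best_items, best_value = (), 0.0
    for r in range(n + 1):
        for items in combinations(range(n), r):
            if sum(weights[i] for i in items) <= capacity:
                value = sum(values[i] for i in items)
                if value > best_value:
                    best_items, best_value = items, value
    return best_items

def regret(predicted_values, true_values, weights, capacity):
    """Optimization loss: true value lost by optimizing with predicted coefficients."""
    chosen = solve_knapsack(predicted_values, weights, capacity)
    optimal = solve_knapsack(true_values, weights, capacity)
    achieved = sum(true_values[i] for i in chosen)
    best = sum(true_values[i] for i in optimal)
    return best - achieved

# Example: imperfect predictions can still give zero regret if they induce
# the same optimal solution, which is why training directly on the
# optimization loss can beat a two-stage approach.
weights, capacity = [3, 4, 5], 7
print(regret([10.0, 4.0, 6.0], [9.0, 5.0, 7.0], weights, capacity))
```

Because the solver's output changes only at discrete transition points of the coefficients, this loss is piecewise constant and cannot be differentiated directly, which is the difficulty the paper's approach addresses.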

Published

2022-06-28

How to Cite

Guler, A. U., Demirović, E., Chan, J., Bailey, J., Leckie, C., & Stuckey, P. J. (2022). A Divide and Conquer Algorithm for Predict+Optimize with Non-convex Problems. Proceedings of the AAAI Conference on Artificial Intelligence, 36(4), 3749-3757. https://doi.org/10.1609/aaai.v36i4.20289

Section

AAAI Technical Track on Constraint Satisfaction and Optimization