Self-Supervised Primal-Dual Learning for Constrained Optimization


  • Seonho Park Georgia Institute of Technology
  • Pascal Van Hentenryck Georgia Institute of Technology



CSO: Constraint Optimization, CSO: Applications, CSO: Constraint Programming, CSO: Mixed Discrete/Continuous Optimization, APP: Energy, Environment & Sustainability, ML: Classification and Regression, ML: Optimization, ML: Unsupervised & Self-Supervised Learning, SO: Other Foundations of Search & Optimization


This paper studies how to train machine-learning models that directly approximate the optimal solutions of constrained optimization problems. This is an empirical risk minimization problem under constraints, which is challenging because training must balance optimality and feasibility conditions. Supervised learning methods often approach this challenge by training the model on a large collection of pre-solved instances. This paper takes a different route and proposes Primal-Dual Learning (PDL), a self-supervised training method that requires neither a set of pre-solved instances nor an optimization solver for training and inference. Instead, PDL mimics the trajectory of an Augmented Lagrangian Method (ALM) and jointly trains primal and dual neural networks. Being a primal-dual method, PDL uses instance-specific penalties on the constraint terms in the loss function used to train the primal network. Experiments show that, on a set of nonlinear optimization benchmarks, PDL typically exhibits negligible constraint violations and minor optimality gaps, and is remarkably close to the solutions produced by ALM itself. PDL also demonstrates similar or improved performance in terms of optimality gaps, constraint violations, and training times compared to existing approaches.
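To make the ALM connection concrete, the following is a minimal sketch of the classical augmented Lagrangian iteration whose trajectory PDL mimics during training. The toy problem and all names here are illustrative, not from the paper: minimize f(x) = x² subject to h(x) = x − 1 = 0. In PDL, the primal and dual variables below would instead be predicted by the primal and dual networks across a distribution of instances.

```python
# Classical Augmented Lagrangian Method (ALM) on an illustrative toy problem:
#   minimize f(x) = x^2  subject to  h(x) = x - 1 = 0.
# The augmented Lagrangian with penalty rho is
#   L_rho(x, lam) = f(x) + lam * h(x) + (rho / 2) * h(x)**2.

def alm(rho=10.0, iters=20):
    lam = 0.0  # dual estimate (PDL would predict this with its dual network)
    x = 0.0    # primal estimate (PDL would predict this with its primal network)
    for _ in range(iters):
        # Primal step: minimize L_rho over x for fixed lam. Setting
        # d/dx [x^2 + lam*(x - 1) + (rho/2)*(x - 1)^2] = 0 gives the
        # closed form x = (rho - lam) / (2 + rho).
        x = (rho - lam) / (2.0 + rho)
        # Dual step: gradient ascent on the multiplier,
        # lam <- lam + rho * h(x).
        lam += rho * (x - 1.0)
    return x, lam

x, lam = alm()
# Converges to the KKT point of the toy problem: x* = 1, lam* = -2.
```

The dual ascent step `lam += rho * h(x)` is what the dual network learns to reproduce, while the instance-specific multiplier plays the role of the per-constraint penalty in the primal training loss.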




How to Cite

Park, S., & Van Hentenryck, P. (2023). Self-Supervised Primal-Dual Learning for Constrained Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4052-4060.



AAAI Technical Track on Constraint Satisfaction and Optimization