Teaching the Old Dog New Tricks: Supervised Learning with Constraints
DOI:
https://doi.org/10.1609/aaai.v35i5.16491
Keywords:
Constraint Satisfaction, Neuro-Symbolic AI (NSAI), Classification and Regression
Abstract
Adding constraint support in Machine Learning has the potential to address outstanding issues in data-driven AI systems, such as safety and fairness. Existing approaches typically apply constrained optimization techniques to ML training, enforce constraint satisfaction by adjusting the model design, or use constraints to correct the output. Here, we investigate a different, complementary strategy based on "teaching" constraint satisfaction to a supervised ML method via the direct use of a state-of-the-art constraint solver: this enables taking advantage of decades of research on constrained optimization with limited effort. In practice, we use a decomposition scheme alternating master steps (in charge of enforcing the constraints) and learner steps (where any supervised ML model and training algorithm can be employed). The process leads to approximate constraint satisfaction in general, and convergence properties are difficult to establish; despite this fact, we found empirically that even a naive setup of our approach performs well on ML tasks with fairness constraints, and on classical datasets with synthetic constraints.
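The alternating master/learner decomposition described in the abstract can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the toy constraint (non-negative predictions), the least-squares learner, and the clipping-based master step are all stand-ins; in the paper the master step invokes a full constraint solver.

```python
# Illustrative sketch of a master/learner decomposition for supervised
# learning with constraints. All names here are hypothetical; the toy
# constraint is "predictions must be non-negative", so the master step
# reduces to a simple projection (clip) rather than a constraint solver.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)   # noisy labels (may be negative)

def learner_step(X, z):
    # Learner step: any supervised method can be used; here, ordinary
    # least squares fit against the adjusted (feasible) targets z.
    w, *_ = np.linalg.lstsq(X, z, rcond=None)
    return w

def master_step(y, y_pred):
    # Master step: compute feasible targets close to both the true labels
    # and the current model predictions. For the constraint z >= 0 this is
    # a closed-form projection; in general a constraint solver is called.
    return np.clip(0.5 * (y + y_pred), 0.0, None)

z = np.clip(y, 0.0, None)          # initial feasible targets
for _ in range(10):                # alternate learner and master steps
    w = learner_step(X, z)         # fit the model to the adjusted targets
    z = master_step(y, X @ w)      # re-enforce the constraint on targets

print(bool(np.all(z >= 0.0)))      # the adjusted targets are always feasible
```

Note that, as the abstract states, constraint satisfaction is approximate in general: the learner's own predictions need not be exactly feasible, only driven toward the feasible targets produced by the master step.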
Published
2021-05-18
How to Cite
Detassis, F., Lombardi, M., & Milano, M. (2021). Teaching the Old Dog New Tricks: Supervised Learning with Constraints. Proceedings of the AAAI Conference on Artificial Intelligence, 35(5), 3742-3749. https://doi.org/10.1609/aaai.v35i5.16491
Section
AAAI Technical Track on Constraint Satisfaction and Optimization