PenDer: Incorporating Shape Constraints via Penalized Derivatives

Authors

  • Akhil Gupta University of Illinois, Urbana-Champaign
  • Lavanya Marla University of Illinois, Urbana-Champaign
  • Ruoyu Sun University of Illinois, Urbana-Champaign
  • Naman Shukla Deepair LLC
  • Arinbjörn Kolbeinsson Imperial College London

DOI:

https://doi.org/10.1609/aaai.v35i13.17373

Keywords:

Accountability, Interpretability & Explainability, (Deep) Neural Network Algorithms, Optimization, Learning Human Values and Preferences

Abstract

When deploying machine learning models in the real world, system designers may wish that models exhibit certain shape behavior, i.e., that model outputs follow a particular shape with respect to input features. Trends such as monotonicity, convexity, and diminishing or accelerating returns are some of the desired shapes. The presence of these shapes makes the model more interpretable for system designers and adequately fair for customers. We observe that many such common shapes are related to derivatives, and propose a new approach, PenDer (Penalizing Derivatives), which incorporates these shape constraints by penalizing the derivatives. We further present an Augmented Lagrangian Method (ALM) to solve this constrained optimization problem. Experiments on three real-world datasets illustrate that even though both PenDer and state-of-the-art Lattice models achieve similar conformance to shape, PenDer better captures the sensitivity of predictions with respect to the intended features. We also demonstrate that PenDer achieves better test performance than Lattice while enforcing more desirable shape behavior.
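To illustrate the core idea of the abstract, the following is a minimal sketch (not the authors' code) of enforcing a monotonicity shape constraint by penalizing the model's derivative. It fits a small polynomial model with a hinge-squared penalty on negative derivatives at the training points, using a simple penalty method rather than the full ALM described in the paper; the model, data, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: enforce df/dx >= 0 (monotonic increase) by
# penalizing negative derivatives, in the spirit of PenDer. The
# polynomial model and penalty weight are assumptions, not the paper's setup.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = x + 0.05 * rng.standard_normal(50)   # noisy increasing trend

def predict(c, x):
    # f(x) = c0 + c1*x + c2*x^2
    return c[0] + c[1] * x + c[2] * x ** 2

def derivative(c, x):
    # closed-form df/dx of the polynomial model
    return c[1] + 2.0 * c[2] * x

def loss(c, lam):
    mse = np.mean((predict(c, x) - y) ** 2)
    # derivative penalty: hinge-squared on points where df/dx < 0
    shape_pen = np.mean(np.maximum(0.0, -derivative(c, x)) ** 2)
    return mse + lam * shape_pen

# crude gradient descent with numerical gradients, for illustration only
c = np.zeros(3)
lam = 10.0
for _ in range(5000):
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3)
        e[i] = 1e-5
        g[i] = (loss(c + e, lam) - loss(c - e, lam)) / 2e-5
    c -= 0.1 * g
```

After training, the fitted model's derivative is non-negative over the input range, so the monotonicity constraint holds at the sampled points while the data is still fit well. The paper replaces this fixed-weight penalty with an Augmented Lagrangian Method, which adaptively updates multipliers instead of relying on a hand-tuned `lam`.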

Published

2021-05-18

How to Cite

Gupta, A., Marla, L., Sun, R., Shukla, N., & Kolbeinsson, A. (2021). PenDer: Incorporating Shape Constraints via Penalized Derivatives. Proceedings of the AAAI Conference on Artificial Intelligence, 35(13), 11536-11544. https://doi.org/10.1609/aaai.v35i13.17373

Section

AAAI Technical Track on Philosophy and Ethics of AI