Sample-Specific Output Constraints for Neural Networks

Authors

  • Mathis Brosowsky, FZI Research Center for Information Technology, Karlsruhe Institute of Technology
  • Florian Keck, Karlsruhe Institute of Technology
  • Olaf Dünkel, Karlsruhe Institute of Technology
  • Marius Zöllner, FZI Research Center for Information Technology, Karlsruhe Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i8.16841

Keywords:

(Deep) Neural Network Algorithms, Reinforcement Learning

Abstract

It is common practice to constrain the output space of a neural network with the final layer to a problem-specific value range. However, many tasks require restricting the output space for each input independently to a different subdomain with non-trivial geometry, e.g., to exclude hazardous outputs sample-wise in safety-critical applications. We propose ConstraintNet, a scalable neural network architecture that constrains the output space in each forward pass independently. In contrast to prior approaches, which perform a projection in the final layer, ConstraintNet applies an input-dependent parametrization of the constrained output space. Thereby, the complete interior of the constrained region is covered and the computational cost is reduced significantly. For constraints in the form of convex polytopes, we leverage the vertex representation to specify the parametrization. A second modification adds an auxiliary input in the form of a tensor description of the constraint, enabling the handling of multiple constraints for the same sample. Finally, ConstraintNet is end-to-end trainable with almost no overhead in the forward and backward passes. We demonstrate ConstraintNet on two regression tasks: First, we modify a CNN and construct several constraints for facial landmark detection. Second, we apply the approach to a follow-object controller for vehicles and thereby accomplish safe reinforcement learning. In both experiments, ConstraintNet improves performance, and we conclude that our approach is promising for applying neural networks in safety-critical environments.
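The vertex-representation idea mentioned in the abstract can be illustrated with a minimal sketch: any convex combination of a polytope's vertices lies inside that polytope, so mapping unconstrained network outputs through a softmax yields valid convex weights and hence an output that is guaranteed to satisfy the constraint. The function below is a hypothetical illustration of this principle, not the authors' implementation; the function name and the particular triangle constraint are assumptions for the example.

```python
import numpy as np

def constrained_output(logits, vertices):
    """Map unconstrained logits to a point inside the convex polytope
    spanned by `vertices` (shape: num_vertices x output_dim).

    Softmax produces weights w_i >= 0 with sum(w_i) = 1, so the
    weighted sum of vertices is a convex combination and therefore
    lies in the polytope by construction -- no projection needed.
    """
    z = logits - logits.max()            # shift for numerical stability
    w = np.exp(z) / np.exp(z).sum()      # convex weights
    return w @ vertices                  # point in conv(vertices)

# Example: constrain a 2-D regression output to a triangle whose
# vertices could be chosen per sample (sample-specific constraint).
verts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y = constrained_output(np.array([2.0, -1.0, 0.5]), verts)
```

Because the softmax covers the full probability simplex, every interior point of the polytope is reachable, which matches the abstract's claim that the complete interior of the constrained region is covered.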

Published

2021-05-18

How to Cite

Brosowsky, M., Keck, F., Dünkel, O., & Zöllner, M. (2021). Sample-Specific Output Constraints for Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6812-6821. https://doi.org/10.1609/aaai.v35i8.16841

Section

AAAI Technical Track on Machine Learning I