A Constraint-Based Approach to Learning and Explanation

Authors

  • Gabriele Ciravegna, University of Florence
  • Francesco Giannini, University of Siena
  • Stefano Melacci, University of Siena
  • Marco Maggini, University of Siena
  • Marco Gori, University of Siena

DOI:

https://doi.org/10.1609/aaai.v34i04.5774

Abstract

In the last few years, remarkable progress has been made in expressing domain knowledge through the mathematical notion of constraint. However, this progress has mostly concerned the process of finding solutions that are consistent with a given set of constraints, whereas learning “new” constraints that express new knowledge is still an open challenge. In this paper we propose a novel approach to the learning of constraints that is based on information-theoretic principles. The basic idea consists in maximizing the transfer of information between the task functions and a set of learnable constraints, implemented with neural networks subject to L1 regularization. This process leads to the unsupervised development of new constraints that are fulfilled in different sub-portions of the input domain. In addition, we define a simple procedure that explains the behaviour of the newly devised constraints in terms of First-Order Logic formulas, thus extracting novel knowledge about the relationships among the original tasks. An experimental evaluation supports the proposed approach and also explores the regularization effects introduced by the proposed Information-Based Learning of Constraint (IBLC) algorithm.
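The abstract outlines the core mechanism: auxiliary neural networks act as learnable constraints over the task-function outputs, trained under L1 regularization so that each constraint depends on few tasks and can later be read as a First-Order Logic formula. The following Python (PyTorch) sketch is only illustrative of that setup; the constraint network, the surrogate information-transfer objective based on binary entropies, and all names (ConstraintNet, l1_penalty, the toy task outputs f_x) are assumptions made for illustration and do not reproduce the paper's exact IBLC formulation.

```python
import torch
import torch.nn as nn

class ConstraintNet(nn.Module):
    """Maps task-function outputs f(x) to constraint activations in [0, 1]."""
    def __init__(self, n_tasks, n_constraints, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_tasks, hidden), nn.ReLU(),
            nn.Linear(hidden, n_constraints), nn.Sigmoid(),
        )

    def forward(self, task_outputs):
        return self.net(task_outputs)

def l1_penalty(model):
    # L1 regularization pushes weights towards zero, so each learned
    # constraint ends up depending on few tasks (easier FOL reading).
    return sum(p.abs().sum() for p in model.parameters())

torch.manual_seed(0)
f_x = torch.rand(128, 5)                 # placeholder outputs of 5 task functions
g = ConstraintNet(n_tasks=5, n_constraints=3)
opt = torch.optim.Adam(g.parameters(), lr=1e-3)

for step in range(200):
    c = g(f_x).clamp(1e-6, 1 - 1e-6)     # constraint activations in (0, 1)
    # Surrogate "information transfer" objective (an assumption, not the
    # paper's exact criterion): confident activations per example (low
    # conditional entropy) but diverse activations across the batch
    # (high entropy of the mean activation).
    p_mean = c.mean(dim=0)
    h_mean = -(p_mean * p_mean.log() + (1 - p_mean) * (1 - p_mean).log()).mean()
    h_cond = -(c * c.log() + (1 - c) * (1 - c).log()).mean()
    loss = h_cond - h_mean + 1e-4 * l1_penalty(g)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this kind of setup, the L1 term drives many connections towards zero, which is what would make a compact logical reading of each learned constraint feasible, in the spirit of the explanation procedure described in the abstract.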

Published

2020-04-03

How to Cite

Ciravegna, G., Giannini, F., Melacci, S., Maggini, M., & Gori, M. (2020). A Constraint-Based Approach to Learning and Explanation. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3658-3665. https://doi.org/10.1609/aaai.v34i04.5774

Section

AAAI Technical Track: Machine Learning