User Driven Model Adjustment via Boolean Rule Explanations

Authors

  • Elizabeth M. Daly, IBM Research
  • Massimiliano Mattetti, IBM Research
  • Öznur Alkan, IBM Research
  • Rahul Nair, IBM Research

DOI:

https://doi.org/10.1609/aaai.v35i7.16737

Keywords:

Human-in-the-loop Machine Learning

Abstract

AI solutions are heavily dependent on the quality and accuracy of the input training data; however, the training data may not always fully reflect the most up-to-date policy landscape or may be missing business logic. Advances in explainability have opened the possibility of allowing users to interact with interpretable explanations of ML predictions in order to inject modifications or constraints that more accurately reflect the current realities of the system. In this paper, we present a solution that leverages the predictive power of ML models while allowing the user to specify modifications to decision boundaries. Our interactive overlay approach achieves this goal without requiring model retraining, making it appropriate for systems that need to apply instant changes to their decision making. We demonstrate that user feedback rules can be layered over the ML predictions to provide immediate changes, which in turn supports learning with less data.
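As a concrete illustration of the overlay idea described in the abstract, the sketch below layers user-supplied Boolean rules over a trained scikit-learn classifier so that rule-covered inputs are relabeled immediately, without retraining. The FeedbackRule and RuleOverlay names, the first-match precedence, and the example rule are assumptions made for illustration, not the authors' implementation.

```python
# Minimal sketch of layering Boolean feedback rules over ML predictions.
# All names and the example rule are hypothetical, for illustration only.
from dataclasses import dataclass
from typing import Callable, List

import numpy as np
from sklearn.linear_model import LogisticRegression


@dataclass
class FeedbackRule:
    """A user-specified Boolean rule that forces a label when it fires."""
    condition: Callable[[np.ndarray], bool]  # predicate over one feature vector
    label: int                               # label to assign on a match


class RuleOverlay:
    """Applies feedback rules on top of a trained model's predictions.

    Rules are checked in order; the first matching rule overrides the model,
    so decision-boundary changes take effect instantly, with no retraining.
    """
    def __init__(self, model, rules: List[FeedbackRule]):
        self.model = model
        self.rules = rules

    def predict(self, X: np.ndarray) -> np.ndarray:
        base = self.model.predict(X)
        for i, x in enumerate(X):
            for rule in self.rules:
                if rule.condition(x):
                    base[i] = rule.label
                    break
        return base


# Usage: train a model, then overlay a hypothetical policy rule, e.g.
# "reject whenever feature 0 (say, income) falls below 20".
X_train = np.random.RandomState(0).rand(100, 3) * 100
y_train = (X_train[:, 0] > 50).astype(int)
model = LogisticRegression().fit(X_train, y_train)

overlay = RuleOverlay(model, [FeedbackRule(lambda x: x[0] < 20, label=0)])
print(overlay.predict(X_train[:5]))
```

Giving rules precedence over the model is one possible design choice; it preserves the model's predictive power outside the rule-covered regions while guaranteeing that user-specified constraints are honored immediately.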

Published

2021-05-18

How to Cite

Daly, E. M., Mattetti, M., Alkan, Ö., & Nair, R. (2021). User Driven Model Adjustment via Boolean Rule Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 5896-5904. https://doi.org/10.1609/aaai.v35i7.16737

Issue

Vol. 35 No. 7 (2021)

Section

AAAI Technical Track on Humans and AI