MARTA: Leveraging Human Rationales for Explainable Text Classification

Authors

  • Ines Arous, University of Fribourg
  • Ljiljana Dolamic, armasuisse
  • Jie Yang, Delft University of Technology
  • Akansha Bhardwaj, University of Fribourg
  • Giuseppe Cuccu, University of Fribourg
  • Philippe Cudré-Mauroux, University of Fribourg

DOI:

https://doi.org/10.1609/aaai.v35i7.16734

Keywords:

Human-in-the-loop Machine Learning, Learning of Cost, Reliability, and Skill of Labelers, Probabilistic Graphical Models, Accountability, Interpretability & Explainability

Abstract

Explainability is a key requirement for text classification in many application domains, ranging from sentiment analysis to medical diagnosis and legal reviews. Existing methods often rely on "attention" mechanisms for explaining classification results by estimating the relative importance of input units. However, recent studies have shown that such mechanisms tend to mis-identify irrelevant input units in their explanations. In this work, we propose a hybrid human-AI approach that incorporates human rationales into attention-based text classification models to improve the explainability of classification results. Specifically, we ask workers to provide rationales for their annotations by selecting relevant pieces of text. We introduce MARTA, a Bayesian framework that jointly learns an attention-based model and the reliability of workers while injecting human rationales into model training. We derive a principled optimization algorithm based on variational inference with efficient updating rules for learning MARTA parameters. Extensive validation on real-world datasets shows that our framework significantly improves over the state of the art in terms of both classification explainability and accuracy.
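To make the high-level idea above concrete, the sketch below illustrates one plausible reading of it: an attention-based text classifier whose attention distribution is additionally supervised by worker-selected rationales, with each worker's rationale loss scaled by a reliability weight. This is not the authors' MARTA implementation (which is a Bayesian model trained via variational inference, with worker reliability inferred jointly rather than fixed); the model, loss, and all names such as AttentionClassifier, rationale_loss, and worker_reliability are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionClassifier(nn.Module):
    """Toy attention-based text classifier (illustrative, not MARTA itself)."""

    def __init__(self, vocab_size, embed_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.attn_score = nn.Linear(embed_dim, 1)  # per-token attention logit
        self.out = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        h = self.embed(token_ids)                                 # (B, T, D)
        attn = F.softmax(self.attn_score(h).squeeze(-1), dim=-1)  # (B, T)
        pooled = torch.bmm(attn.unsqueeze(1), h).squeeze(1)       # (B, D)
        return self.out(pooled), attn


def rationale_loss(attn, rationale_mask, reliability):
    # Encourage attention mass to fall on worker-marked rationale tokens,
    # scaled by a reliability weight in [0, 1]. Reliability is a fixed
    # constant here; in MARTA it is a latent variable learned jointly
    # with the model.
    mass_on_rationale = (attn * rationale_mask).sum(dim=-1).clamp(min=1e-8)
    return -reliability * mass_on_rationale.log().mean()


# Toy usage with random data standing in for crowd annotations.
model = AttentionClassifier(vocab_size=1000)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

tokens = torch.randint(0, 1000, (4, 12))         # fake token ids
labels = torch.randint(0, 2, (4,))               # worker-provided labels
rationales = torch.zeros(4, 12)
rationales[:, :3] = 1.0                          # worker-marked text spans
worker_reliability = 0.8                         # assumed known here

logits, attn = model(tokens)
loss = F.cross_entropy(logits, labels) + rationale_loss(attn, rationales,
                                                        worker_reliability)
opt.zero_grad()
loss.backward()
opt.step()
```

The key design point the sketch tries to convey is that rationales supervise the attention weights themselves, not just the label, and that annotations from less reliable workers contribute less to that supervision.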

Published

2021-05-18

How to Cite

Arous, I., Dolamic, L., Yang, J., Bhardwaj, A., Cuccu, G., & Cudré-Mauroux, P. (2021). MARTA: Leveraging Human Rationales for Explainable Text Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 35(7), 5868-5876. https://doi.org/10.1609/aaai.v35i7.16734

Section

AAAI Technical Track on Humans and AI