Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions


  • Xiaoting Shao TU Darmstadt
  • Arseny Skryagin TU Darmstadt
  • Wolfgang Stammer TU Darmstadt
  • Patrick Schramowski TU Darmstadt
  • Kristian Kersting TU Darmstadt


Ethics -- Bias, Fairness, Transparency & Privacy, Human-Computer Interaction, Classification and Regression


Explaining black-box models such as deep neural networks is becoming increasingly important, as it helps to build trust and aids debugging. Popular forms of explanations map the features to a vector indicating their individual importance to a decision on the instance level. These explanations can then be used to prevent the model from learning wrong biases present in the data, e.g., due to ambiguity. For instance, Ross et al.'s ``right for the right reasons'' (RRR) propagates user explanations backwards through the network by formulating differentiable constraints based on input gradients. Unfortunately, input gradients, like many other widely used explanation methods, only approximate the decision boundary and assume the underlying model to be fixed. Here, we demonstrate how to make use of influence functions---a well-known tool from robust statistics---in the constraints to correct the model's behaviour more effectively. Our empirical evidence demonstrates that this ``right for better reasons'' (RBR) considerably reduces the time to correct the classifier at training time and boosts the quality of explanations at inference time compared to input gradients. Moreover, we showcase the effectiveness of RBR in correcting ``Clever Hans''-like behaviour in a real, high-dimensional domain.
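The influence-function machinery the abstract refers to can be illustrated on a simple model. The sketch below follows the standard Koh & Liang formulation of influence functions for an L2-damped logistic regression; it is an illustration of the underlying statistic only, not the paper's RBR constraint, and the function name and `damping` parameter are assumptions for this example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def influence(w, X, y, x_test, y_test, damping=1e-3):
    """Influence of each training point on the test loss for a damped
    logistic regression (a sketch of the classic influence function;
    RBR plugs such quantities into differentiable training constraints).

    Returns I(z_i, z_test) = -g_test^T H^{-1} g_i for every training
    point z_i; a negative value means upweighting z_i would lower the
    test loss (a "helpful" point).
    """
    n, d = X.shape
    p = sigmoid(X @ w)
    # Hessian of the mean log loss: X^T diag(p(1-p)) X / n, plus damping
    # to keep it positive definite.
    H = (X.T * (p * (1.0 - p))) @ X / n + damping * np.eye(d)
    # Per-example gradients of the log loss: (p_i - y_i) * x_i.
    grads = (p - y)[:, None] * X
    g_test = (sigmoid(x_test @ w) - y_test) * x_test
    # Solve H v = g_test once, then take dot products with all gradients.
    return -grads @ np.linalg.solve(H, g_test)
```

A training point identical to the test point (with the same label) always gets a non-positive influence score, since the damped Hessian is positive definite; constraining such scores during training, rather than input gradients, is the change the abstract describes.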




How to Cite

Shao, X., Skryagin, A., Stammer, W., Schramowski, P., & Kersting, K. (2021). Right for Better Reasons: Training Differentiable Models by Constraining their Influence Functions. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9533-9540. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17148



AAAI Technical Track on Machine Learning IV