Local Explanations for Reinforcement Learning

Authors

  • Ronny Luss, IBM Research
  • Amit Dhurandhar, IBM Research
  • Miao Liu, IBM Research

DOI:

https://doi.org/10.1609/aaai.v37i7.26081

Keywords:

ML: Transparent, Interpretable, Explainable ML

Abstract

Many works in explainable AI have focused on explaining black-box classification models. Explaining deep reinforcement learning (RL) policies in a manner that can be understood by domain users has received much less attention. In this paper, we propose a novel perspective for understanding RL policies based on identifying important states from automatically learned meta-states. The key conceptual difference between our approach and many previous ones is that we form meta-states based on locality governed by the expert policy dynamics rather than on similarity of actions, and that we do not assume any particular knowledge of the underlying topology of the state space. Theoretically, we show that our algorithm for finding meta-states converges and that the objective for selecting important states from each meta-state is submodular, leading to efficient, high-quality greedy selection. Experiments on four domains (four rooms, door-key, minipacman, and pong) and a carefully conducted user study illustrate that our perspective leads to a better understanding of the policy. We conjecture that this is because our meta-states are more intuitive: the corresponding important states are strong indicators of tractable intermediate goals that are easier for humans to interpret and follow.
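To make the submodularity claim in the abstract concrete: for a monotone submodular set objective, the standard greedy algorithm carries the classic (1 - 1/e) approximation guarantee (Nemhauser et al., 1978), which is what makes greedy selection both efficient and high quality. The sketch below is a minimal, hypothetical illustration of such a greedy loop, not the paper's implementation; the names `greedy_select` and `objective` are placeholders, and the paper's actual importance objective over states of a meta-state is defined in the full text.

```python
from typing import Callable, Iterable, List, Set, TypeVar

State = TypeVar("State")

def greedy_select(
    candidates: Iterable[State],
    objective: Callable[[Set[State]], float],
    k: int,
) -> List[State]:
    """Greedily pick up to k states that maximize a set objective.

    When the objective is monotone submodular, this greedy scheme enjoys
    the classic (1 - 1/e) approximation guarantee, which is the standard
    reason submodularity yields efficient, high-quality greedy selection.
    """
    selected: List[State] = []
    remaining: Set[State] = set(candidates)
    for _ in range(k):
        if not remaining:
            break
        # Choose the candidate with the largest marginal gain over the
        # currently selected set.
        best = max(remaining, key=lambda s: objective(set(selected) | {s}))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy usage with a coverage-style (monotone submodular) objective:
# each hypothetical "state" covers a set of trajectories, and the
# objective rewards the total number of trajectories covered.
if __name__ == "__main__":
    coverage = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}}

    def f(S: Set[str]) -> float:
        covered: Set[int] = set()
        for s in S:
            covered |= coverage[s]
        return float(len(covered))

    print(greedy_select(coverage.keys(), f, k=2))  # two states maximizing coverage
```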

Published

2023-06-26

How to Cite

Luss, R., Dhurandhar, A., & Liu, M. (2023). Local Explanations for Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 9002-9010. https://doi.org/10.1609/aaai.v37i7.26081

Issue

Vol. 37 No. 7

Section

AAAI Technical Track on Machine Learning II