Understanding Learned Models by Identifying Important Features at the Right Resolution
  • Kyubin Lee, University of Wisconsin-Madison
  • Akshay Sood, University of Wisconsin-Madison
  • Mark Craven, University of Wisconsin-Madison

In many application domains, it is important to characterize how complex learned models make their decisions across the distribution of instances. One way to do this is to identify the features, and interactions among them, that contribute to a model’s predictive accuracy. We present a model-agnostic approach to this task that makes the following specific contributions. Our approach (i) tests feature groups, in addition to base features, and seeks the level of resolution at which important features can be identified, (ii) uses hypothesis testing to rigorously assess the effect of each feature on the model’s loss, (iii) employs a hierarchical approach to control the false discovery rate when testing feature groups and individual base features for importance, and (iv) uses hypothesis testing to identify important interactions among features and feature groups. We evaluate our approach by analyzing random forest and LSTM neural network models learned in two challenging biomedical applications.
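To make contribution (ii) concrete, the sketch below shows one simple way to hypothesis-test a single feature's effect on a model's loss via permutation. This is an illustrative approximation only, not the paper's exact procedure; the `predict` and `loss_fn` callables and all parameter names here are hypothetical.

```python
import numpy as np

def permutation_pvalue(predict, X, y, loss_fn, feature_idx,
                       n_perm=500, seed=0):
    """One-sided permutation p-value for one feature's effect on model loss.

    Null hypothesis: the feature carries no information the model uses, so
    permuting its column leaves the expected loss unchanged.  A small
    p-value indicates the loss reliably worsens when the feature's values
    are scrambled.  (Illustrative sketch only; not the paper's exact test.)
    """
    rng = np.random.default_rng(seed)
    base_loss = loss_fn(y, predict(X))
    X_perm = X.copy()
    n_not_worse = 0  # permutations that do NOT increase the loss
    for _ in range(n_perm):
        X_perm[:, feature_idx] = rng.permutation(X[:, feature_idx])
        if loss_fn(y, predict(X_perm)) <= base_loss:
            n_not_worse += 1
    # Add-one correction keeps the p-value valid and strictly positive.
    return (n_not_worse + 1) / (n_perm + 1)
```

A feature group could be tested the same way by permuting several columns jointly, and the resulting p-values across groups and base features would then be adjusted with a false-discovery-rate procedure, as the abstract's hierarchical testing describes.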


How to Cite

Lee, K., Sood, A., & Craven, M. (2019). Understanding Learned Models by Identifying Important Features at the Right Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 4155-4163. https://doi.org/10.1609/aaai.v33i01.33014155

AAAI Technical Track: Machine Learning