Accurate and Robust Feature Importance Estimation under Distribution Shifts
Keywords: (Deep) Neural Network Algorithms
Abstract
With increasing reliance on the outcomes of black-box models in critical applications, post-hoc explainability tools that do not require access to the model internals are often used to enable humans to understand and trust these models. In particular, we focus on the class of methods that can reveal the influence of input features on the predicted outputs. Despite their widespread adoption, existing methods are known to suffer from one or more of the following challenges: high computational complexity, large uncertainties and, most importantly, an inability to handle real-world domain shifts. In this paper, we propose PRoFILE (Producing Robust Feature Importances using Loss Estimates), a novel feature importance estimation method that addresses all these challenges. Through the use of a loss estimator jointly trained with the predictive model and a causal objective, PRoFILE can accurately estimate the feature importance scores even under complex distribution shifts, without any additional re-training. In addition, we develop learning strategies for training the loss estimator, namely contrastive and dropout calibration, and find that it can effectively detect distribution shifts. Using empirical studies on several benchmark image and non-image datasets, we show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
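The central mechanism described above is a loss estimator trained jointly with the predictive model: alongside the usual prediction objective, an auxiliary module is fit to predict the per-sample loss, and its outputs can then flag inputs (e.g., shifted data) where the model is unreliable. The sketch below illustrates this joint-training idea in a deliberately minimal form; the linear model, the squared-feature loss estimator, the heteroscedastic toy data, and all hyperparameters are illustrative assumptions, not the paper's actual architecture or objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data with input-dependent noise, so per-sample loss
# is genuinely predictable from the features (illustrative assumption).
n, d = 256, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
noise_scale = 0.1 + np.abs(X[:, 0])          # harder samples have larger |x_0|
y = X @ w_true + noise_scale * rng.normal(size=n)

w = np.zeros(d)   # predictive model (linear; stand-in for a network)
v = np.zeros(d)   # loss estimator acting on squared features (stand-in module)
lr = 0.05

for _ in range(500):
    pred = X @ w
    per_sample_loss = (pred - y) ** 2        # actual loss, one value per sample
    est_loss = (X ** 2) @ v                  # estimator's guess of that loss

    # Joint update: the model minimizes mean prediction loss, while the
    # estimator regresses onto the (detached) per-sample losses.
    grad_w = 2 * X.T @ (pred - y) / n
    grad_v = 2 * (X ** 2).T @ (est_loss - per_sample_loss) / n
    w -= lr * grad_w
    v -= lr * grad_v

# After training, estimated losses should track actual losses, so high
# estimates can serve as a proxy signal for unreliable (e.g., shifted) inputs.
final_loss = np.mean((X @ w - y) ** 2)
corr = np.corrcoef((X @ w - y) ** 2, (X ** 2) @ v)[0, 1]
print(round(float(corr), 2))
```

In PRoFILE the analogous loss estimates feed the feature-importance computation itself; here the sketch only shows the joint objective and the resulting correlation between estimated and actual per-sample losses.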
How to Cite
Thiagarajan, J. J., Narayanaswamy, V., Anirudh, R., Bremer, P.-T., & Spanias, A. (2021). Accurate and Robust Feature Importance Estimation under Distribution Shifts. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7891-7898. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16963
AAAI Technical Track on Machine Learning II