Permutation-Based Hypothesis Testing for Neural Networks

Authors

  • Francesca Mandel, University of Pennsylvania
  • Ian Barnett, University of Pennsylvania

DOI:

https://doi.org/10.1609/aaai.v38i13.29343

Keywords:

  • ML: Transparent, Interpretable, Explainable ML
  • ML: Other Foundations of Machine Learning

Abstract

Neural networks are powerful predictive models, but they provide little insight into the nature of the relationships between predictors and outcomes. Although numerous methods have been proposed to quantify the relative contributions of input features, statistical inference and hypothesis testing of feature associations remain largely unexplored. We propose a permutation-based approach to testing that uses the partial derivatives of the network output with respect to specific inputs to assess both the significance of input features and whether significant features are linearly associated with the network output. These tests, which can be flexibly applied to a variety of network architectures, enhance the explanatory power of neural networks and, combined with their powerful predictive capability, extend the applicability of these models.
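
To make the idea concrete, below is a minimal sketch of a gradient-based permutation test in PyTorch. It is not the authors' exact procedure: the test statistic (mean squared partial derivative of the output with respect to a feature), the network architecture, the training loop, and the refit-on-permuted-data null are all illustrative assumptions, and the sketch covers only the significance test, not the paper's additional test of linearity.

```python
import torch
import torch.nn as nn

def fit_network(X, y, epochs=200, seed=0):
    # Small feedforward regression network; architecture is illustrative.
    torch.manual_seed(seed)
    model = nn.Sequential(nn.Linear(X.shape[1], 16), nn.Tanh(), nn.Linear(16, 1))
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        nn.functional.mse_loss(model(X), y).backward()
        opt.step()
    return model

def gradient_statistic(model, X, j):
    # Assumed statistic: mean squared partial derivative of the
    # network output with respect to feature j, computed via autograd.
    X = X.clone().requires_grad_(True)
    model(X).sum().backward()
    return (X.grad[:, j] ** 2).mean().item()

def permutation_p_value(X, y, j, n_perm=100):
    # Observed statistic from a network fit to the original data.
    observed = gradient_statistic(fit_network(X, y), X, j)
    null = []
    for b in range(n_perm):
        Xp = X.clone()
        Xp[:, j] = Xp[torch.randperm(Xp.shape[0]), j]  # break the X_j-y link
        null.append(gradient_statistic(fit_network(Xp, y, seed=b + 1), Xp, j))
    # Standard permutation p-value with the +1 correction.
    return (1 + sum(s >= observed for s in null)) / (1 + n_perm)

# Toy check: feature 0 drives y, feature 1 is pure noise.
torch.manual_seed(0)
n = 200
X = torch.randn(n, 2)
y = torch.sin(X[:, 0]).unsqueeze(1) + 0.1 * torch.randn(n, 1)
print("p-value, feature 0:", permutation_p_value(X, y, 0))
print("p-value, feature 1:", permutation_p_value(X, y, 1))
```

Refitting the network on each permuted dataset keeps the null statistics comparable to the observed one; evaluating gradients of the original fit on permuted inputs would not reflect the null of no association.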

Published

2024-03-24

How to Cite

Mandel, F., & Barnett, I. (2024). Permutation-Based Hypothesis Testing for Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14306-14314. https://doi.org/10.1609/aaai.v38i13.29343

Section

AAAI Technical Track on Machine Learning IV