Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime

Authors

  • Rui Zhang, NCMIS, CEMS, RCSDS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China
  • Shihua Zhang, NCMIS, CEMS, RCSDS, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing, China; School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v36i8.20893

Keywords:

Machine Learning (ML)

Abstract

Understanding the black-box predictions of neural networks is challenging. To this end, early studies designed the influence function (IF) to measure the effect of removing a single training point on the trained model. However, the classic implicit Hessian-vector product (IHVP) method for calculating IF is fragile, and a theoretical analysis of IF in the context of neural networks is still lacking. We therefore utilize neural tangent kernel (NTK) theory to calculate the IF of a neural network trained with regularized mean-square loss, and prove that, for two-layer ReLU networks, the approximation error can be made arbitrarily small when the width is sufficiently large. We also analyze the error bound of the classic IHVP method in the over-parameterized regime to understand when and why it fails. In detail, our theoretical analysis reveals that (1) the accuracy of IHVP depends on the regularization term, and is quite low under weak regularization; (2) the accuracy of IHVP is significantly correlated with the probability density of the corresponding training points. We further borrow NTK theory to understand IFs better, including quantifying the complexity of influential samples and depicting the variation of IFs during training dynamics. Numerical experiments on real-world data confirm our theoretical results and demonstrate our findings.
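
As context for the abstract, the classic influence function (popularized by Koh and Liang, 2017) approximates the effect of up-weighting a training point z on a test loss by -∇L(z_test)ᵀ H⁻¹ ∇L(z), where H is the Hessian of the regularized training objective. Below is a minimal NumPy sketch, not the authors' code: it computes this quantity exactly for a ridge-regularized linear model with mean-square loss, a simple analogue of the regularized MSE setting the paper studies; all data and variable names are illustrative.

```python
import numpy as np

# Minimal sketch (not from the paper): classic influence function
# I(z, z_test) = -grad L(z_test; w)^T H^{-1} grad L(z; w)
# for a ridge-regularized linear model with mean-square loss.
rng = np.random.default_rng(0)
n, d, lam = 100, 5, 1e-2                       # lam: regularization strength
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + 0.1 * rng.normal(size=n)

# Regularized objective: (1/2n) * sum_i (x_i^T w - y_i)^2 + (lam/2) * ||w||^2
H = X.T @ X / n + lam * np.eye(d)              # exact Hessian (constant here)
w = np.linalg.solve(H, X.T @ y / n)            # closed-form minimizer

def grad_loss(x, t, w):
    """Gradient of the per-sample squared loss (x^T w - t)^2 / 2."""
    return (x @ w - t) * x

x_test, y_test = rng.normal(size=d), 0.0       # an illustrative test point
z = 0                                          # training point under study

# Up-weighting influence of point z on the test loss; removing z corresponds
# to an up-weight of -1/n, so the removal effect is approximately -influence / n.
ihvp = np.linalg.solve(H, grad_loss(X[z], y[z], w))   # H^{-1} grad L(z)
influence = -grad_loss(x_test, y_test, w) @ ihvp
print(f"influence of training point {z} on the test loss: {influence:.4f}")
```

In deep networks H is too large to invert exactly, so H⁻¹∇L(z) is instead approximated iteratively; this is the IHVP approximation whose error in the over-parameterized regime, and its dependence on the regularization strength and on the density of the training points, is what the paper analyzes.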

Published

2022-06-28

How to Cite

Zhang, R., & Zhang, S. (2022). Rethinking Influence Functions of Neural Networks in the Over-Parameterized Regime. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9082-9090. https://doi.org/10.1609/aaai.v36i8.20893

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III