HyDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks

Authors

  • Yuanyuan Chen, Nanyang Technological University
  • Boyang Li, Nanyang Technological University; Alibaba-NTU Singapore Joint Research Institute
  • Han Yu, Nanyang Technological University
  • Pengcheng Wu, Nanyang Technological University
  • Chunyan Miao, Nanyang Technological University

Keywords

Ethics -- Bias, Fairness, Transparency & Privacy

Abstract

The behaviors of deep neural networks (DNNs) are notoriously resistant to human interpretation. In this paper, we propose Hypergradient Data Relevance Analysis, or HyDRA, which interprets the predictions made by DNNs as effects of their training data. Existing approaches generally estimate data contributions around the final model parameters and ignore how the training data shape the optimization trajectory. By unrolling the hypergradient of the test loss w.r.t. the weights of the training data, HyDRA assesses the contribution of training data toward test data points throughout the training trajectory. To accelerate computation, we remove the Hessian from the calculation and prove that, under moderate conditions, the approximation error is bounded. Corroborating this theoretical claim, empirical results indicate the error is indeed small. In addition, we quantitatively demonstrate that HyDRA outperforms influence functions in accurately estimating data contribution and detecting noisy data labels. The source code is available at https://github.com/cyyever/aaai_hydra.
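The abstract's core idea can be illustrated on a toy problem. The sketch below is a minimal, hypothetical rendition (not the authors' implementation; see the linked repository for that): each training example i carries a weight ε_i, and we accumulate an approximation of ∂w_T/∂ε_i along the SGD trajectory. Under the paper's Hessian-free simplification, the (I − η·H) propagation term is dropped, so the accumulator simply sums the per-example update directions. Relevance of example i to a test point is then the dot product of that accumulator with the test-loss gradient at the final weights. All variable names here are illustrative.

```python
import numpy as np

# Toy linear regression: loss_i(w) = 0.5 * (x_i . w - y_i)^2
rng = np.random.default_rng(0)
n, d = 20, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)
x_test = rng.normal(size=d)
y_test = x_test @ w_true

lr, epochs = 0.05, 50
w = np.zeros(d)
# dw_deps[i] approximates d w_T / d eps_i along the trajectory.
# Hessian-free simplification: the (I - lr * Hessian) factor that would
# propagate earlier contributions through later steps is dropped.
dw_deps = np.zeros((n, d))

for _ in range(epochs):
    for i in rng.permutation(n):
        g = (X[i] @ w - y[i]) * X[i]   # per-example gradient at current w
        dw_deps[i] -= lr * g           # accumulate hypergradient contribution
        w -= lr * g                    # plain SGD step

# Relevance of each training example to the test point: gradient of the
# test loss at the final weights, dotted with the accumulated hypergradient.
g_test = (x_test @ w - y_test) * x_test
relevance = dw_deps @ g_test           # one scalar per training example
```

A large negative `relevance[i]` indicates that upweighting example i would decrease the test loss (a helpful example); a large positive value flags a harmful one, which is how such scores can surface mislabeled training data.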

Published

2021-05-18

How to Cite

Chen, Y., Li, B., Yu, H., Wu, P., & Miao, C. (2021). HyDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 7081-7089. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/16871

Section

AAAI Technical Track on Machine Learning I