Experimental Observations of the Topology of Convolutional Neural Network Activations

Authors

  • Emilie Purvine, Pacific Northwest National Laboratory
  • Davis Brown, Pacific Northwest National Laboratory
  • Brett Jefferson, Pacific Northwest National Laboratory
  • Cliff Joslyn, Pacific Northwest National Laboratory
  • Brenda Praggastis, Pacific Northwest National Laboratory
  • Archit Rathore, Scientific Computing and Imaging (SCI) Institute and School of Computing, University of Utah
  • Madelyn Shapiro, Pacific Northwest National Laboratory
  • Bei Wang, Scientific Computing and Imaging (SCI) Institute and School of Computing, University of Utah
  • Youjia Zhou, Scientific Computing and Imaging (SCI) Institute and School of Computing, University of Utah

DOI:

https://doi.org/10.1609/aaai.v37i8.26134

Keywords:

ML: Transparent, Interpretable, Explainable ML, KRR: Other Foundations of Knowledge Representation & Reasoning, CV: Interpretability and Transparency, ML: Other Foundations of Machine Learning, CV: Other Foundations of Computer Vision, DMKM: Data Visualization & Summarization, ML: Evaluation and Analysis (Machine Learning), ML: Clustering, ML: Deep Neural Network Algorithms

Abstract

Topological data analysis (TDA) is a branch of computational mathematics, bridging algebraic topology and data science, that provides compact, noise-robust representations of complex structures. Deep neural networks (DNNs) learn millions of parameters associated with a series of transformations defined by the model architecture, resulting in high-dimensional, difficult-to-interpret internal representations of input data. As DNNs become more ubiquitous across multiple sectors of our society, there is increasing recognition that mathematical methods are needed to aid analysts, researchers, and practitioners in understanding and interpreting how these models' internal representations relate to the final classification. In this paper we apply cutting-edge techniques from TDA with the goal of gaining insight into the interpretability of convolutional neural networks used for image classification. We use two common TDA approaches to explore several methods for modeling hidden-layer activations as high-dimensional point clouds, and provide experimental evidence that these point clouds capture valuable structural information about the model's process. First, we demonstrate that a distance metric based on persistent homology can be used to quantify meaningful differences between layers, and we discuss these distances in the broader context of existing representational similarity metrics for neural network interpretability. Second, we show that a mapper graph can provide semantic insight into how these models organize hierarchical class knowledge at each layer. These observations demonstrate that TDA is a useful tool to help deep learning practitioners unlock the hidden structures of their models.
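To make the first approach concrete: the activations a hidden layer produces over a batch of inputs can be treated as a high-dimensional point cloud, summarized by a persistence diagram, and diagrams from different layers compared with a Wasserstein-type distance. The following is a minimal sketch of that idea, not the authors' exact pipeline; it assumes the ripser and persim Python packages and uses random arrays as stand-ins for real layer activations.

    import numpy as np
    from ripser import ripser        # pip install ripser
    from persim import wasserstein   # pip install persim

    # Stand-in activations: in practice these would be hidden-layer
    # outputs for a batch of images, flattened to (n_inputs, n_features).
    rng = np.random.default_rng(0)
    acts_layer1 = rng.normal(size=(200, 64))
    acts_layer2 = rng.normal(size=(200, 128))

    # Persistence diagrams (H0 and H1) of each layer's activation point cloud.
    dgms1 = ripser(acts_layer1, maxdim=1)["dgms"]
    dgms2 = ripser(acts_layer2, maxdim=1)["dgms"]

    # Wasserstein distance between the H1 diagrams quantifies how the
    # topological structure of the representation changes across layers.
    d = wasserstein(dgms1[1], dgms2[1])
    print(f"H1 Wasserstein distance between layers: {d:.3f}")

Comparing H1 (loop-level) diagrams sidesteps the infinite-persistence point that every H0 diagram carries; how such layer-to-layer distances behave on trained networks is what the paper examines.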
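The second approach builds a mapper graph from the same kind of point cloud: project the activations through a lens function, cover the projected space with overlapping bins, cluster the points that fall in each bin, and connect clusters that share points. Below is a minimal sketch using the kmapper (KeplerMapper) package; the PCA lens, cover resolution, and clustering parameters are illustrative choices, not the paper's settings.

    import kmapper as km                      # pip install kmapper
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs
    from sklearn.decomposition import PCA

    # Stand-in activation point cloud with some cluster structure.
    acts, _ = make_blobs(n_samples=500, n_features=32, centers=4,
                         random_state=0)

    mapper = km.KeplerMapper(verbose=0)

    # Lens: project the point cloud to 2-D (any scikit-learn transformer
    # or custom projection can serve as the lens).
    lens = mapper.fit_transform(acts, projection=PCA(n_components=2))

    # Cover the lens with overlapping bins and cluster within each bin;
    # nodes are clusters, and edges join clusters that share points.
    graph = mapper.map(
        lens,
        acts,
        cover=km.Cover(n_cubes=10, perc_overlap=0.3),
        clusterer=DBSCAN(eps=10.0, min_samples=3),
    )
    print(len(graph["nodes"]), "mapper nodes")

Coloring the resulting nodes by predicted or true class labels is what lets a mapper graph expose how a layer organizes hierarchical class knowledge.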

Published

2023-06-26

How to Cite

Purvine, E., Brown, D., Jefferson, B., Joslyn, C., Praggastis, B., Rathore, A., Shapiro, M., Wang, B., & Zhou, Y. (2023). Experimental Observations of the Topology of Convolutional Neural Network Activations. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9470-9479. https://doi.org/10.1609/aaai.v37i8.26134

Section

AAAI Technical Track on Machine Learning III