Neural Representations Reveal Distinct Modes of Class Fitting in Residual Convolutional Networks

Authors

  • Michał Jamroż, AGH University of Science and Technology, Krakow, Poland
  • Marcin Kurdziel, AGH University of Science and Technology, Krakow, Poland

DOI:

https://doi.org/10.1609/aaai.v37i7.25966

Keywords:

ML: Representation Learning, ML: Adversarial Learning & Robustness, ML: Deep Neural Architectures, ML: Deep Neural Network Algorithms, ML: Probabilistic Methods

Abstract

We leverage probabilistic models of neural representations to investigate how residual networks fit classes. To this end, we estimate class-conditional density models for representations learned by deep ResNets. We then use these models to characterize distributions of representations across learned classes. Surprisingly, we find that classes in the investigated models are not fitted in a uniform way. On the contrary: we uncover two groups of classes that are fitted with markedly different distributions of representations. These distinct modes of class fitting are evident only in the deeper layers of the investigated models, indicating that they are not related to low-level image features. We show that the uncovered structure in neural representations correlates with memorization of training examples and adversarial robustness. Finally, we compare class-conditional distributions of neural representations between memorized and typical examples. This allows us to uncover where in the network structure class labels arise for memorized and standard inputs.
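The abstract's core tool is a class-conditional density model over layer representations. As a minimal sketch of the idea (not the authors' estimator; the diagonal-Gaussian model, the variance floor, and all function names here are illustrative assumptions), one can fit a per-class Gaussian to feature vectors extracted from a network layer and then score new representations under each class's density:

```python
import numpy as np

def fit_class_gaussians(feats, labels):
    """Fit a diagonal-covariance Gaussian to each class's representations.

    feats: (n_examples, n_features) array of layer activations.
    labels: (n_examples,) array of class ids.
    Returns {class_id: (mean, variance)}.
    """
    models = {}
    for c in np.unique(labels):
        x = feats[labels == c]
        mu = x.mean(axis=0)
        var = x.var(axis=0) + 1e-6  # variance floor for numerical stability (assumed)
        models[c] = (mu, var)
    return models

def log_density(x, model):
    """Log-density of representations x under one class's diagonal Gaussian."""
    mu, var = model
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x - mu) ** 2 / var, axis=-1)
```

Comparing such per-class log-densities across layers is one simple way to characterize how differently classes are fitted; richer density estimators would follow the same interface.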

Published

2023-06-26

How to Cite

Jamroż, M., & Kurdziel, M. (2023). Neural Representations Reveal Distinct Modes of Class Fitting in Residual Convolutional Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 7988-7995. https://doi.org/10.1609/aaai.v37i7.25966

Section

AAAI Technical Track on Machine Learning II