Frivolous Units: Wider Networks Are Not Really That Wide

Authors

  • Stephen Casper, Boston Children's Hospital, Harvard Medical School, USA; Center for Brains, Minds, and Machines (CBMM)
  • Xavier Boix, Boston Children's Hospital, Harvard Medical School; Center for Brains, Minds, and Machines (CBMM); Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
  • Vanessa D'Amario, Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
  • Ling Guo, Neuroscience Graduate Program, University of California San Francisco, USA
  • Martin Schrimpf, Center for Brains, Minds, and Machines (CBMM); Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, USA
  • Kasper Vinken, Boston Children's Hospital, Harvard Medical School, USA; Center for Brains, Minds, and Machines (CBMM)
  • Gabriel Kreiman, Boston Children's Hospital, Harvard Medical School, USA; Center for Brains, Minds, and Machines (CBMM)

DOI:

https://doi.org/10.1609/aaai.v35i8.16853

Keywords:

Representation Learning, Bio-inspired Learning, Learning on the Edge & Model Compression

Abstract

A remarkable characteristic of overparameterized deep neural networks (DNNs) is that their accuracy does not degrade when the network width is increased. Recent evidence suggests that developing compressible representations allows the complexity of large networks to be adjusted for the learning task at hand. However, these representations are poorly understood. A promising strand of research inspired by biology involves studying representations at the unit level, as this offers a more granular interpretation of the neural mechanisms. To better understand what allows width to increase without accuracy decreasing, we ask: Are there mechanisms at the unit level by which networks control their effective complexity? If so, how do these depend on the architecture, dataset, and hyperparameters? We identify two distinct types of “frivolous” units that proliferate as the network’s width increases: prunable units, which can be dropped out of the network without significant change to the output, and redundant units, whose activities can be expressed as a linear combination of those of other units. These units imply complexity constraints, since the function the network computes could be expressed without them. We also identify how architecture and a number of training factors influence the development of these units. Together, these results help to explain why the accuracy of DNNs does not degrade when width is increased, and they highlight the importance of frivolous units for understanding implicit regularization in DNNs.
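To make the two kinds of frivolous units concrete, the sketch below shows one simple way they could be detected for a single layer. It is a minimal illustration, not the authors' exact procedure: the activation matrix `A`, the `predict` callable, and the thresholds `r2_threshold` and `tol` are all assumptions introduced here. Redundant units are flagged by how well a least-squares combination of the other units reconstructs their activity; prunable units are flagged by ablating them one at a time and checking that accuracy barely changes.

```python
# Hedged sketch: identifying "frivolous" units in one layer of a trained
# network. Assumes we have recorded an activation matrix A of shape
# (n_samples, n_units) for that layer, plus a `predict(mask)` callable that
# returns accuracy with the zeroed-out units ablated. All names and
# thresholds here are illustrative assumptions.
import numpy as np

def redundant_units(A, r2_threshold=0.99):
    """Units whose activity is (almost) a linear combination of the others."""
    n_units = A.shape[1]
    redundant = []
    for j in range(n_units):
        others = np.delete(A, j, axis=1)   # activations of all other units
        target = A[:, j]
        # Least-squares fit of unit j's activity from the other units.
        coef, *_ = np.linalg.lstsq(others, target, rcond=None)
        residual = target - others @ coef
        ss_res = np.sum(residual ** 2)
        ss_tot = np.sum((target - target.mean()) ** 2) + 1e-12
        r2 = 1.0 - ss_res / ss_tot
        if r2 >= r2_threshold:
            redundant.append(j)
    return redundant

def prunable_units(predict, n_units, tol=0.005):
    """Units that can be ablated individually without hurting accuracy."""
    baseline = predict(np.ones(n_units))   # accuracy with no units ablated
    prunable = []
    for j in range(n_units):
        mask = np.ones(n_units)
        mask[j] = 0.0                      # ablate unit j only
        if baseline - predict(mask) <= tol:
            prunable.append(j)
    return prunable
```

In this framing, both tests bound the layer's effective complexity: any unit flagged by either test could be removed or folded into the others without changing the function the network computes, up to the chosen tolerance.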

Published

2021-05-18

How to Cite

Casper, S., Boix, X., D’Amario, V., Guo, L., Schrimpf, M., Vinken, K., & Kreiman, G. (2021). Frivolous Units: Wider Networks Are Not Really That Wide. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6921-6929. https://doi.org/10.1609/aaai.v35i8.16853

Issue

Vol. 35 No. 8 (2021)

Section

AAAI Technical Track on Machine Learning I