Transformer Layers as Painters

Authors

  • Qi Sun (Sakana AI; Institute of Science Tokyo)
  • Marc Pickett (Emergence AI)
  • Aakash Kumar Nain (Emergence AI)
  • Llion Jones (Sakana AI)

DOI:

https://doi.org/10.1609/aaai.v39i24.34708

Abstract

Despite their nearly universal adoption for large language models, the internal workings of transformers are not well understood. We aim to better understand the impact of removing or reorganizing information throughout the layers of a pretrained transformer. Such an understanding could both yield better usage of existing models and suggest architectural improvements that produce new variants. We present a series of empirical studies on frozen models showing that the lower and final layers of pretrained transformers differ from the middle layers, but that the middle layers have a surprising degree of uniformity. We further show that some classes of problems are robust to skipping layers, running the layers in a different order than the one in which they were trained, or running the layers in parallel. Our observations suggest that even frozen pretrained models may gracefully trade accuracy for latency by skipping layers or running layers in parallel.
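The layer-skipping manipulation described above can be sketched in a few lines against a frozen pretrained model. The snippet below is illustrative rather than the authors' code: it assumes a GPT-2-style decoder from the Hugging Face transformers library, and the model name ("gpt2"), the skipped layer range (4-7), and the prompt are arbitrary choices for demonstration.

    # Hedged sketch: drop a contiguous block of middle layers from a frozen
    # pretrained transformer before a single inference pass. The model
    # ("gpt2"), the skipped range (layers 4-7), and the prompt are
    # illustrative assumptions only.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    # GPT-2 keeps its transformer blocks in model.transformer.h (a ModuleList).
    blocks = model.transformer.h
    kept = [b for i, b in enumerate(blocks) if not (4 <= i <= 7)]
    model.transformer.h = torch.nn.ModuleList(kept)
    model.config.n_layer = len(kept)  # keep the config consistent

    # Single forward pass (no KV cache) through the truncated layer stack.
    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs, use_cache=False).logits
    next_id = int(logits[0, -1].argmax())
    print(tokenizer.decode([next_id]))

Reordering the layers can be sketched the same way by permuting the kept list instead of truncating it; parallel execution would additionally require combining (for example, averaging) the block outputs rather than chaining them.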

Published

2025-04-11

How to Cite

Sun, Q., Pickett, M., Nain, A. K., & Jones, L. (2025). Transformer Layers as Painters. Proceedings of the AAAI Conference on Artificial Intelligence, 39(24), 25219–25227. https://doi.org/10.1609/aaai.v39i24.34708

Section

AAAI Technical Track on Natural Language Processing III