Quiet Feature Learning in Algorithmic Tasks
DOI:
https://doi.org/10.1609/aaai.v40i44.41111

Abstract
We train Transformer-based language models on ten foundational algorithmic tasks and observe pronounced phase transitions in their loss curves that deviate from established power-law scaling trends. Over large ranges of compute, the validation loss barely improves, then abruptly decreases. Probing the models' internal representations reveals that quiet features are learned prior to any decrease in task loss. These quiet features represent intermediate algorithmic computations that do not by themselves improve the output loss. Ablation experiments demonstrate that individual quiet features are causally necessary for task performance. Our results demonstrate that substantial representational progress can remain hidden beneath an apparently flat loss curve, challenging the prevailing use of cross-entropy as a proxy for learning and motivating richer diagnostics for monitoring model training.
Published
2026-03-14
How to Cite
Naidu, P., Wang, Z., Bergen, L., & Paturi, R. (2026). Quiet Feature Learning in Algorithmic Tasks. Proceedings of the AAAI Conference on Artificial Intelligence, 40(44), 37756–37764. https://doi.org/10.1609/aaai.v40i44.41111
Section
AAAI Special Track on AI Alignment