Backpropagation-Free Deep Learning with Recursive Local Representation Alignment


  • Alexander G. Ororbia Rochester Institute of Technology
  • Ankur Mali University of South Florida
  • Daniel Kifer The Pennsylvania State University
  • C. Lee Giles The Pennsylvania State University



Keywords: ML: Bio-Inspired Learning, ML: Optimization, ML: Deep Neural Network Algorithms


Abstract

Training deep neural networks on large-scale datasets requires significant hardware resources whose costs (even on cloud platforms) put them out of reach of smaller organizations, groups, and individuals. Backpropagation (backprop), the workhorse for training these networks, is an inherently sequential process that is difficult to parallelize. Furthermore, researchers must continually develop specialized techniques, such as particular weight initializations and enhanced activation functions, to ensure stable parameter optimization. Our goal is to find an effective, neuro-biologically plausible alternative to backprop that can be used to train deep networks. In this paper, we propose a backprop-free procedure, recursive local representation alignment, for training large-scale architectures. Experiments with residual networks on CIFAR-10 and the larger ImageNet benchmark show that our algorithm generalizes as well as backprop while converging sooner, owing to weight updates that are parallelizable and computationally less demanding. This is empirical evidence that a backprop-free algorithm can scale up to larger datasets.
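The core idea the abstract alludes to — replacing backprop's global, sequential gradient chain with per-layer targets so that each layer's weight update depends only on locally available quantities — can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's exact rec-LRA procedure: the network sizes, the fixed error matrices `E`, the nudging factor `beta`, and the delta-rule update are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network; layer sizes are illustrative.
sizes = [8, 16, 16, 4]
W = [rng.standard_normal((m, n)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]
# Separate "error" matrices carry mismatch signals downward, standing in for
# backprop's transposed forward weights (a hypothetical wiring choice).
E = [rng.standard_normal((n, m)) * 0.1 for n, m in zip(sizes[:-1], sizes[1:])]

def f(x):
    return np.tanh(x)

def train_step(x, y, beta=0.1, lr=0.01):
    # Forward pass: record pre-activations (zs) and activations (hs).
    zs, hs = [], [x]
    for Wl in W:
        zs.append(Wl @ hs[-1])
        hs.append(f(zs[-1]))
    # Top target is the label; each lower target is the forward activity
    # nudged along a locally transmitted error signal.
    targets = [None] * len(W)
    targets[-1] = y
    for l in range(len(W) - 1, 0, -1):
        err = hs[l + 1] - targets[l]               # mismatch at layer l+1
        targets[l - 1] = f(zs[l - 1] - beta * (E[l] @ err))
    # Every update uses only its own layer's input and mismatch, so the
    # per-layer updates are independent and could run in parallel.
    for l in range(len(W)):
        err = hs[l + 1] - targets[l]
        W[l] -= lr * np.outer(err, hs[l])
    return float(np.mean((hs[-1] - y) ** 2))
```

The key contrast with backprop is visible in the final loop: no derivative information flows through the whole network, so the updates need not be computed in sequence from the top layer down.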




How to Cite

Ororbia, A. G., Mali, A., Kifer, D., & Giles, C. L. (2023). Backpropagation-Free Deep Learning with Recursive Local Representation Alignment. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9327-9335.



AAAI Technical Track on Machine Learning III