Kickback Cuts Backprop's Red-Tape: Biologically Plausible Credit Assignment in Neural Networks

Authors

  • David Balduzzi, Victoria University of Wellington
  • Hastagiri Vanchinathan, ETH Zurich
  • Joachim Buhmann, ETH Zurich

DOI:

https://doi.org/10.1609/aaai.v29i1.9217

Keywords:

neural networks, deep learning, error backpropagation, credit assignment, gradient descent

Abstract

Error backpropagation is an extremely effective algorithm for assigning credit in artificial neural networks. However, weight updates under Backprop depend on lengthy recursive computations and require separate output and error messages, features that are not shared by biological neurons and that are perhaps unnecessary. In this paper, we revisit Backprop and the credit assignment problem. We first decompose Backprop into a collection of interacting learning algorithms; provide regret bounds on the performance of these sub-algorithms; and factorize Backprop's error signals. Using these results, we derive a new credit assignment algorithm for nonparametric regression, Kickback, that is significantly simpler than Backprop. Finally, we provide a sufficient condition under which Kickback follows error gradients, and show that Kickback matches Backprop's performance on real-world regression benchmarks.
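The abstract's contrast can be made concrete: Backprop computes each layer's error recursively from the layer above, while a Kickback-style scheme broadcasts the single global error to every unit. The sketch below illustrates only that structural contrast on a toy ReLU regression network; the broadcast update shown is a hypothetical simplification for illustration, not the paper's exact Kickback rule, and all variable names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-layer ReLU regression net: x -> a1 -> a2 -> scalar output.
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((4, 4))
W3 = rng.standard_normal((4, 1))
x = rng.standard_normal(3)
y = 1.0  # regression target

# Forward pass.
a1 = np.maximum(x @ W1, 0.0)
a2 = np.maximum(a1 @ W2, 0.0)
y_hat = float(a2 @ W3)
err = y_hat - y  # single global scalar error

# Backprop: each layer's error signal is computed recursively from the
# layer above, so deep layers must wait on a chain of backward messages.
g2 = (a2 > 0).astype(float)
g1 = (a1 > 0).astype(float)
delta2_bp = (W3[:, 0] * err) * g2
delta1_bp = (W2 @ delta2_bp) * g1

# Broadcast-style signal (hypothetical sketch): every unit receives the
# SAME scalar error, modulated only by locally available quantities --
# its activity gate and the summed magnitude of its outgoing weights.
delta2_kb = err * W3[:, 0] * g2
delta1_kb = err * np.abs(W2).sum(axis=1) * g1

# No recursion: delta1_kb is computable as soon as err is known,
# independently of delta2_kb. For the top hidden layer the two signals
# coincide, since Backprop's recursion there is only one step deep.
```

The point of the sketch is locality: the broadcast signal for a unit depends only on the global error and quantities the unit could plausibly access itself, which is the simplification the paper formalizes and for which it gives a sufficient condition to still follow the gradient.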

Published

2015-02-10

How to Cite

Balduzzi, D., Vanchinathan, H., & Buhmann, J. (2015). Kickback Cuts Backprop’s Red-Tape: Biologically Plausible Credit Assignment in Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9217

Section

AAAI Technical Track: Cognitive Modeling