TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning
DOI: https://doi.org/10.1609/aaai.v39i16.33910

Abstract
Forward-only algorithms offer a promising memory-efficient alternative to Backpropagation (BP) for on-device learning. However, state-of-the-art forward-only algorithms, e.g., Forward-Forward (FF), still require a substantial amount of memory during training, often exceeding the limits of mobile edge and Internet of Things (IoT) devices. At the same time, existing memory-optimization techniques, e.g., binarizing parameters and activations, are mainly designed for BP and hence significantly degrade classification performance when applied to state-of-the-art forward-only algorithms. In this paper, we propose a memory-efficient forward-only algorithm, called TinyFoA, to reduce dynamic memory overhead during training. TinyFoA optimizes memory efficiency not only through layer-wise training, but also by partially updating each layer and by binarizing the weights and activations. We extensively evaluate TinyFoA against BP and other forward-only algorithms and demonstrate its effectiveness and superiority over state-of-the-art forward-only algorithms in terms of classification performance and training memory overhead, reducing memory overhead by an order of magnitude.
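Two of the ingredients named in the abstract, layer-wise (local) training with a Forward-Forward-style "goodness" objective and binarized weights trained via a straight-through estimator, can be sketched as follows. This is a toy NumPy illustration under our own assumptions (the class, parameter names, and toy data are invented), not the authors' TinyFoA implementation, which additionally binarizes activations and updates each layer only partially.

```python
import numpy as np

rng = np.random.default_rng(0)

def binarize(w):
    # Sign binarization to {-1, +1}. The forward pass uses the binary
    # weights; the full-precision "latent" weights receive the updates
    # (straight-through estimator).
    return np.where(w >= 0, 1.0, -1.0)

class BinaryFFLayer:
    """One layer trained locally (no error signal from other layers) with a
    Forward-Forward-style goodness objective and binarized forward weights.
    Illustrative sketch only, not the TinyFoA algorithm itself."""

    def __init__(self, n_in, n_out, lr=0.05, theta=2.0):
        self.w = rng.normal(0.0, 0.1, (n_in, n_out))  # latent real weights
        self.lr, self.theta = lr, theta

    def goodness(self, x):
        h = np.maximum(x @ binarize(self.w), 0.0)  # ReLU activations
        return (h ** 2).sum(axis=1)                # per-sample goodness

    def train_step(self, x_pos, x_neg):
        # Push goodness above theta on positive data, below it on negative.
        for x, s in ((x_pos, +1.0), (x_neg, -1.0)):
            wb = binarize(self.w)
            h = np.maximum(x @ wb, 0.0)
            g = (h ** 2).sum(axis=1)
            z = np.clip(s * (g - self.theta), -30.0, 30.0)
            p = 1.0 / (1.0 + np.exp(-z))           # sigmoid probability
            dg = s * (p - 1.0)                     # dL/dg for loss -log(p)
            dh = 2.0 * h * dg[:, None]
            grad = x.T @ dh / len(x)               # gradient w.r.t. wb
            self.w -= self.lr * grad               # straight-through update

# Toy data: positive samples carry +2 in feature 0, negatives carry -2.
def batch(sign, n=64):
    x = rng.normal(0.0, 0.3, (n, 8))
    x[:, 0] += 2.0 * sign
    return x

layer = BinaryFFLayer(n_in=8, n_out=16)
for _ in range(300):
    layer.train_step(batch(+1.0), batch(-1.0))
```

After training, the layer's goodness separates positive from negative inputs even though its forward weights are constrained to {-1, +1}; only per-layer statistics are needed, so no cross-layer activations have to be stored as in BP.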
Published
2025-04-11
How to Cite
Huang, B., & Aminifar, A. (2025). TinyFoA: Memory Efficient Forward-Only Algorithm for On-Device Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 17377–17385. https://doi.org/10.1609/aaai.v39i16.33910
Section
AAAI Technical Track on Machine Learning II