An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)

Authors

  • Christine Mwase, Fudan University
  • Albert Njoroge Kahira, Jülich Supercomputing Centre
  • Zhuo Zou, Fudan University

DOI:

https://doi.org/10.1609/aaai.v38i21.30485

Keywords:

AI Architectures, Deep Learning, Optimization

Abstract

Deep learning (DL), despite its success across many fields, remains expensive and inaccessible to many because it depends on powerful supercomputers and high-end GPUs. This study explores alternative computing infrastructure and methods for distributed DL training on low-energy, low-cost devices. We experiment on Raspberry Pi 4 devices with ARM Cortex-A72 processors, training a ResNet-18 model on the CIFAR-10 dataset. Our findings reveal both limitations and opportunities for future optimization, paving the way toward a DL toolset for low-energy edge devices.
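For readers who want a concrete starting point, below is a minimal sketch of how such a setup can be wired together with PyTorch's DistributedDataParallel over the CPU-only gloo backend (the Pi's ARM cores have no CUDA support). This is not the authors' code: the batch size, learning rate, epoch count, and rendezvous settings are illustrative assumptions.

    # Minimal sketch: CPU-only distributed training of ResNet-18 on CIFAR-10,
    # e.g. across Raspberry Pi 4 nodes. Assumes PyTorch + torchvision are
    # installed on each node; all hyperparameters here are illustrative.
    import torch
    import torch.distributed as dist
    import torchvision
    import torchvision.transforms as transforms
    from torch.nn.parallel import DistributedDataParallel as DDP
    from torch.utils.data import DataLoader
    from torch.utils.data.distributed import DistributedSampler

    def main():
        # gloo is PyTorch's CPU collective backend; NCCL needs NVIDIA GPUs,
        # which the ARM Cortex-A72 lacks. env:// reads MASTER_ADDR,
        # MASTER_PORT, RANK, and WORLD_SIZE from the environment.
        dist.init_process_group(backend="gloo", init_method="env://")
        rank = dist.get_rank()

        transform = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.4914, 0.4822, 0.4465),
                                 (0.2470, 0.2435, 0.2616)),
        ])
        train_set = torchvision.datasets.CIFAR10(
            root="./data", train=True, download=True, transform=transform)
        # DistributedSampler shards the dataset so each node trains on a
        # disjoint slice per epoch.
        sampler = DistributedSampler(train_set)
        loader = DataLoader(train_set, batch_size=32,
                            sampler=sampler, num_workers=2)

        # ResNet-18 with its classifier head sized for CIFAR-10's 10 classes.
        model = DDP(torchvision.models.resnet18(num_classes=10))

        criterion = torch.nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

        for epoch in range(10):
            sampler.set_epoch(epoch)  # reshuffle the shards each epoch
            for images, labels in loader:
                optimizer.zero_grad()
                loss = criterion(model(images), labels)
                loss.backward()  # gradients are all-reduced across nodes
                optimizer.step()
            if rank == 0:
                print(f"epoch {epoch} done, last loss {loss.item():.3f}")

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

Each node would launch this script with torchrun (or an equivalent launcher) pointing at one designated rendezvous host, e.g. torchrun --nnodes=4 --nproc_per_node=1 --node_rank=<r> --master_addr=<pi0-ip> train_cifar10.py, where the node count and script name are hypothetical.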

Published

2024-03-24

How to Cite

Mwase, C., Kahira, A. N., & Zou, Z. (2024). An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23590–23591. https://doi.org/10.1609/aaai.v38i21.30485