An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract)
DOI:
https://doi.org/10.1609/aaai.v38i21.30485
Keywords:
AI Architectures, Deep Learning, Optimization
Abstract
Deep learning (DL), despite its success in various fields, remains expensive and inaccessible to many due to its need for powerful supercomputing and high-end GPUs. This study explores alternative computing infrastructure and methods for distributed DL on low-energy, low-cost devices. We experiment on Raspberry Pi 4 devices with ARM Cortex-A72 processors and train a ResNet-18 model on the CIFAR-10 dataset. Our findings reveal limitations and opportunities for future optimizations, paving the way for a DL toolset for low-energy edge devices.
Published
2024-03-24
How to Cite
Mwase, C., Kahira, A. N., & Zou, Z. (2024). An Empirical Study of Distributed Deep Learning Training on Edge (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23590–23591. https://doi.org/10.1609/aaai.v38i21.30485
Section
AAAI Student Abstract and Poster Program