Distributed Stochastic Gradient Descent with Event-Triggered Communication

Authors

  • Jemin George, CCDC Army Research Laboratory
  • Prudhvi Gurram, Booz Allen Hamilton

DOI:

https://doi.org/10.1609/aaai.v34i05.6206

Abstract

We develop a Distributed Event-Triggered Stochastic GRAdient Descent (DETSGRAD) algorithm for solving non-convex optimization problems typically encountered in distributed deep learning. We propose a novel communication-triggering mechanism that allows the networked agents to update their model parameters aperiodically, and we provide sufficient conditions on the algorithm step-sizes that guarantee asymptotic mean-square convergence. The algorithm is applied to a distributed supervised-learning problem, in which a set of networked agents collaboratively train their individual neural networks to perform image classification, while aperiodically sharing the model parameters with their one-hop neighbors. Results indicate that all agents report similar performance, comparable to that of a centrally trained neural network, while the event-triggered communication significantly reduces inter-agent communication. Results also show that the proposed algorithm allows the individual agents to classify the images even though training data for all the classes are not locally available to every agent.
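The event-triggered communication idea described above can be illustrated with a minimal sketch: each agent takes local stochastic gradient steps and broadcasts its parameters to its one-hop neighbors only when they have drifted sufficiently from the last broadcast value. The toy linear-regression problem, ring topology, and the step-size and threshold schedules below are illustrative assumptions for the sketch, not the specific algorithm or the convergence conditions analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: each agent fits a linear model on its own local data shard.
# All names, schedules, and thresholds here are assumptions for illustration.
n_agents, dim, n_steps = 4, 10, 2000
true_w = rng.normal(size=dim)
data = [rng.normal(size=(200, dim)) for _ in range(n_agents)]
labels = [X @ true_w + 0.1 * rng.normal(size=200) for X in data]

# Ring topology: each agent communicates only with its one-hop neighbors.
neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}

w = [np.zeros(dim) for _ in range(n_agents)]          # local parameters
last_sent = [np.zeros(dim) for _ in range(n_agents)]  # last broadcast values
messages = 0

for t in range(1, n_steps + 1):
    alpha = 1.0 / t           # consensus weight (assumed schedule)
    beta = 0.5 / t ** 0.75    # gradient step-size (assumed schedule)
    threshold = 1.0 / t       # event-triggering threshold (assumed schedule)

    # Event-triggered broadcast: send parameters only if they have drifted
    # far enough from the value the neighbors already hold.
    for i in range(n_agents):
        if np.linalg.norm(w[i] - last_sent[i]) > threshold:
            last_sent[i] = w[i].copy()
            messages += 1

    new_w = []
    for i in range(n_agents):
        # Stochastic gradient from a random local minibatch.
        idx = rng.integers(0, 200, size=16)
        X, y = data[i][idx], labels[i][idx]
        grad = X.T @ (X @ w[i] - y) / len(idx)

        # Consensus on the most recently broadcast neighbor parameters,
        # followed by a local stochastic gradient step.
        consensus = sum(last_sent[j] - last_sent[i] for j in neighbors[i])
        new_w.append(w[i] + alpha * consensus - beta * grad)
    w = new_w

print("messages sent:", messages, "of", n_agents * n_steps, "possible")
print("max parameter error:", max(np.linalg.norm(wi - true_w) for wi in w))
```

Running the sketch shows the qualitative behavior reported in the abstract: broadcasts occur only at a fraction of the iterations, yet the agents' parameters still approach a common solution.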

Published

2020-04-03

How to Cite

George, J., & Gurram, P. (2020). Distributed Stochastic Gradient Descent with Event-Triggered Communication. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7169-7178. https://doi.org/10.1609/aaai.v34i05.6206

Issue

Vol. 34 No. 05 (2020)

Section

AAAI Technical Track: Multiagent Systems