Efficient Distributed Inference of Deep Neural Networks via Restructuring and Pruning

Authors

  • Afshin Abdi, Georgia Institute of Technology
  • Saeed Rashidi, Georgia Institute of Technology
  • Faramarz Fekri, Georgia Institute of Technology
  • Tushar Krishna, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v37i6.25815

Keywords:

ML: Learning on the Edge & Model Compression, ML: Distributed Machine Learning & Federated Learning

Abstract

In this paper, we consider the parallel implementation of an already-trained deep model on multiple processing nodes (a.k.a. workers). Specifically, we investigate how a deep model should be divided into several parallel sub-models, each of which is executed efficiently by a worker. Since latency due to synchronization and data transfer among workers negatively impacts the performance of the parallel implementation, it is desirable to minimize the interdependency among the parallel sub-models. To achieve this goal, we propose to rearrange the neurons in the neural network, partition them (without changing the general topology of the network), and modify the weights such that the interdependency among sub-models is minimized under the computation and communication constraints of the workers, while limiting the impact on the model's performance. We propose RePurpose, a layer-wise model restructuring and pruning technique that guarantees the performance of the overall parallelized model. To apply RePurpose efficiently, we propose an approach based on L0 optimization and the Munkres assignment algorithm. We show that, compared to existing methods, RePurpose significantly improves the efficiency of distributed inference via parallel implementation, both in terms of communication and computational complexity.
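To illustrate the assignment step described in the abstract, the following is a minimal sketch, not the paper's implementation, of how the output neurons of a single layer might be reassigned to workers with the Munkres (Hungarian) algorithm so that the weight mass crossing worker boundaries is minimized. All names here are illustrative assumptions (the function reassign_neurons, and the use of scipy's linear_sum_assignment as the Munkres solver); the full RePurpose method also involves an L0-regularized pruning step that is omitted from this sketch.

```python
# Hypothetical sketch of the neuron-reassignment step: permute a layer's
# output neurons across k workers so that the absolute weight mass crossing
# worker boundaries is minimized, solved as a linear assignment problem
# with the Munkres (Hungarian) algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment


def reassign_neurons(W, k):
    """Assign the n output neurons of weight matrix W (n x m) to k
    equally sized worker blocks, minimizing cross-worker weight mass.
    Assumes the m input neurons are already split into k contiguous
    blocks and that n is divisible by k."""
    n, m = W.shape
    in_blocks = np.array_split(np.arange(m), k)
    # cost_per_block[i, b]: |weight| neuron i receives from workers
    # other than b, i.e. the communication it induces if placed on b.
    total = np.abs(W).sum(axis=1)
    cost_per_block = np.stack(
        [total - np.abs(W[:, blk]).sum(axis=1) for blk in in_blocks],
        axis=1,
    )  # shape (n, k)
    # Expand block costs to one column per output slot (n/k slots per
    # block) so the problem becomes a square linear assignment.
    slots_per_block = n // k
    cost = np.repeat(cost_per_block, slots_per_block, axis=1)  # (n, n)
    rows, cols = linear_sum_assignment(cost)
    block_of_slot = cols // slots_per_block
    return block_of_slot[np.argsort(rows)]  # worker id for each neuron


# Toy usage: 6 output neurons, 6 inputs, 2 workers.
rng = np.random.default_rng(0)
W = rng.normal(size=(6, 6))
print(reassign_neurons(W, 2))
```

Because every worker block exposes exactly n/k slots, the assignment is balanced by construction; the Munkres solver then picks, among all balanced placements, one that minimizes the total cross-worker weight mass for this layer.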

Published

2023-06-26

How to Cite

Abdi, A., Rashidi, S., Fekri, F., & Krishna, T. (2023). Efficient Distributed Inference of Deep Neural Networks via Restructuring and Pruning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(6), 6640-6648. https://doi.org/10.1609/aaai.v37i6.25815

Section

AAAI Technical Track on Machine Learning I