A Flexible Framework for Communication-Efficient Machine Learning

Authors

  • Sarit Khirirat, KTH Royal Institute of Technology
  • Sindri Magnússon, Stockholm University
  • Arda Aytekin, Ericsson AB
  • Mikael Johansson, KTH Royal Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v35i9.16987

Keywords:

Learning on the Edge & Model Compression

Abstract

With the increasing scale of machine learning tasks, it has become essential to reduce the communication between computing nodes. Early work on gradient compression focused on the bottleneck between CPUs and GPUs, but communication efficiency is now needed in a variety of system architectures, from high-performance clusters to energy-constrained IoT devices. In current practice, compression levels are typically chosen before training, and settings that work well for one task may be vastly suboptimal for another dataset or another architecture. In this paper, we propose a flexible framework that adapts the compression level to the true gradient at each iteration, maximizing the improvement in the objective function achieved per communicated bit. Our framework is easy to adapt from one technology to the next by modeling how the communication cost depends on the compression level for the specific technology. Theoretical results and practical experiments indicate that the automatic tuning strategies significantly increase communication efficiency on several state-of-the-art compression schemes.
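
For intuition, the per-iteration tuning rule described in the abstract can be sketched for one concrete compressor: with top-k gradient sparsification, choose the sparsity level k that maximizes a standard descent proxy (the squared norm of the k largest-magnitude gradient entries) divided by the bits needed to transmit them. The function name and the cost model below (32-bit payloads and indices plus a fixed per-message overhead) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def choose_sparsity(grad, bits_per_entry=32, index_bits=32, overhead_bits=0):
    """Pick the top-k sparsity level maximizing the estimated objective
    improvement per communicated bit.  The improvement proxy ||g_topk||^2
    follows standard descent bounds; the bit-cost model (payload + indices
    + fixed per-message overhead) is an illustrative assumption."""
    sq = np.sort(grad ** 2)[::-1]        # squared entries, largest first
    improvement = np.cumsum(sq)          # ||g_topk||^2 for k = 1, ..., d
    ks = np.arange(1, grad.size + 1)
    bits = ks * (bits_per_entry + index_bits) + overhead_bits
    return int(ks[np.argmax(improvement / bits)])

# A larger fixed per-message overhead makes it worth sending more entries.
g = np.random.randn(1000)
print(choose_sparsity(g, overhead_bits=0), choose_sparsity(g, overhead_bits=100_000))
```

With no overhead this rule sends only the largest entry, while a large fixed per-message cost pushes the optimum toward denser updates, matching the abstract's point that the right compression level depends on the communication technology.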

Published

2021-05-18

How to Cite

Khirirat, S., Magnússon, S., Aytekin, A., & Johansson, M. (2021). A Flexible Framework for Communication-Efficient Machine Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8101-8109. https://doi.org/10.1609/aaai.v35i9.16987

Issue

Vol. 35 No. 9 (2021)

Section

AAAI Technical Track on Machine Learning II