HetSeq: Distributed GPU Training on Heterogeneous Infrastructure

Authors

  • Yifan Ding, University of Notre Dame
  • Nicholas Botzer, University of Notre Dame
  • Tim Weninger, University of Notre Dame

DOI:

https://doi.org/10.1609/aaai.v35i17.17813

Keywords:

Heterogeneous Systems, Distributed AI, Machine Learning

Abstract

Modern deep learning systems like PyTorch and TensorFlow are able to train enormous models with billions (or trillions) of parameters on a distributed infrastructure. These systems require that the internal nodes have the same memory capacity and compute performance. Unfortunately, most organizations, especially universities, have a piecemeal approach to purchasing computer systems, resulting in a heterogeneous infrastructure that cannot be used to train large models. The present work describes HetSeq, a software package adapted from the popular PyTorch package that provides the capability to train large neural network models on heterogeneous infrastructure. Experiments with language translation, text classification, and image classification show that HetSeq scales over heterogeneous systems. Additional information, support documents, and source code are publicly available at https://github.com/yifding/hetseq.
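For readers unfamiliar with the distributed setup the abstract refers to, the following is a minimal sketch of ordinary PyTorch distributed data-parallel training, the mechanism HetSeq adapts. It is not HetSeq's own interface (see the GitHub repository above for that); the model, batch, and launcher-provided environment variables shown here are placeholder assumptions.

import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; NCCL is the usual backend for multi-GPU training.
    # Assumes MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE are set by the launcher.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 10).cuda(local_rank)   # placeholder model
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    data = torch.randn(32, 1024).cuda(local_rank)         # placeholder batch
    target = torch.randint(0, 10, (32,)).cuda(local_rank)

    loss = torch.nn.functional.cross_entropy(model(data), target)
    loss.backward()   # gradients are all-reduced across processes here
    optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()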

Published

2021-05-18

How to Cite

Ding, Y., Botzer, N., & Weninger, T. (2021). HetSeq: Distributed GPU Training on Heterogeneous Infrastructure. Proceedings of the AAAI Conference on Artificial Intelligence, 35(17), 15432-15438. https://doi.org/10.1609/aaai.v35i17.17813

Section

IAAI Technical Track on Innovative Tools for Enabling AI Application