Cheaper and Faster: Distributed Deep Reinforcement Learning with Serverless Computing

Authors

  • Hanfei Yu, Louisiana State University
  • Jian Li, Stony Brook University
  • Yang Hua, Queen's University Belfast
  • Xu Yuan, University of Delaware
  • Hao Wang, Louisiana State University

DOI:

https://doi.org/10.1609/aaai.v38i15.29592

Keywords:

ML: Scalability of ML Systems, ML: Reinforcement Learning

Abstract

Deep reinforcement learning (DRL) has achieved immense success in many applications, including gaming AI, robotics, and system scheduling. Numerous distributed algorithms and architectures (e.g., the actor-learner architecture) have been proposed to accelerate DRL training on large-scale server-based clusters. However, training on-policy algorithms with the actor-learner architecture unavoidably wastes resources during synchronization between learners and actors, resulting in significant extra billing. As a promising alternative, serverless computing naturally fits on-policy synchronization and alleviates resource waste in distributed DRL training with its pay-as-you-go pricing. Yet, no prior work has leveraged serverless computing to facilitate DRL training. This paper proposes MinionsRL, the first serverless distributed DRL training framework, which improves training speed and cost efficiency through dynamic actor scaling. We prototype MinionsRL on top of Microsoft Azure Container Instances and evaluate it with popular DRL tasks from OpenAI Gym. Extensive experiments show that MinionsRL reduces total training time by up to 52% and training cost by 86% compared to the latest solutions.
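The abstract's core idea is that on-policy training forces the learner to wait until every actor finishes its rollout, so it pays to launch short-lived, pay-per-use actors and rescale their count each iteration. A minimal sketch of that loop follows; everything in it (collect_rollout, choose_num_actors, the two-armed-bandit "environment", and local processes standing in for serverless containers) is an illustrative assumption, not MinionsRL's actual implementation:

```python
import random
from concurrent.futures import ProcessPoolExecutor

def collect_rollout(weights, steps=64):
    # Stand-in for one ephemeral serverless actor: roll out the current
    # policy and return a trajectory. A two-armed bandit replaces a Gym env.
    traj = []
    for _ in range(steps):
        # epsilon-greedy around the learner's current best action
        if random.random() < 0.3:
            action = random.randrange(2)
        else:
            action = weights["best"]
        reward = 1.0 if action == 1 else 0.0  # arm 1 is secretly optimal
        traj.append((action, reward))
    return traj

def choose_num_actors(iteration, max_actors=8):
    # Placeholder scaling rule; MinionsRL instead decides dynamically how
    # many actors to launch each iteration.
    return min(1 + iteration, max_actors)

def learner_update(weights, trajectories):
    # Toy on-policy "update": switch to the empirically better action.
    reward_sum, count = {0: 0.0, 1: 0.0}, {0: 1, 1: 1}
    for traj in trajectories:
        for action, reward in traj:
            reward_sum[action] += reward
            count[action] += 1
    weights["best"] = max((0, 1), key=lambda a: reward_sum[a] / count[a])
    return weights

if __name__ == "__main__":
    weights = {"best": 0}
    for it in range(5):
        n = choose_num_actors(it)
        # Launch n short-lived actors; in a serverless deployment these
        # would be container invocations billed only while running.
        with ProcessPoolExecutor(max_workers=n) as pool:
            trajs = list(pool.map(collect_rollout, [weights] * n))
        # Waiting for all actors to return is the on-policy sync barrier.
        weights = learner_update(weights, trajs)
        print(f"iter {it}: {n} actors, best action = {weights['best']}")
```

In MinionsRL itself the per-iteration actor count is chosen dynamically and the actors run on Microsoft Azure Container Instances; the fixed choose_num_actors rule above merely marks where that scaling decision plugs in.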

Published

2024-03-24

How to Cite

Yu, H., Li, J., Hua, Y., Yuan, X., & Wang, H. (2024). Cheaper and Faster: Distributed Deep Reinforcement Learning with Serverless Computing. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 16539-16547. https://doi.org/10.1609/aaai.v38i15.29592

Issue

Vol. 38 No. 15

Section

AAAI Technical Track on Machine Learning VI