Scalable Multitask Policy Gradient Reinforcement Learning

Authors

  • Salam El Bsat, Rafik Hariri University
  • Haitham Bou Ammar, American University of Beirut
  • Matthew Taylor, Washington State University

DOI:

https://doi.org/10.1609/aaai.v31i1.10942

Keywords:

Transfer Learning, Multi-Task Learning, Reinforcement Learning, Scalable MTL

Abstract

Policy search reinforcement learning (RL) allows agents to learn autonomously with limited feedback. However, such methods typically require extensive experience to achieve successful behavior due to their tabula rasa nature. Multitask RL is an approach that aims to reduce data requirements by allowing knowledge transfer between tasks. Although successful, current multitask learning methods suffer from scalability issues when considering a large number of tasks. The main reason behind this limitation is their reliance on centralized solutions. This paper proposes a novel distributed multitask RL framework that improves scalability across many different types of tasks. Our framework maps multitask RL to an instance of general consensus and develops an efficient decentralized solver. We justify the correctness of the algorithm both theoretically and empirically: we first prove a convergence rate of O(1/k), with k being the number of iterations, and then show that our algorithm surpasses existing methods on multiple dynamical system benchmarks.
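The abstract's key technical step is mapping multitask RL onto a general consensus problem, the standard template for which is an ADMM-style decentralized solver. The sketch below illustrates that template only, not the paper's algorithm: each task keeps local parameters theta[i] constrained to agree with a shared consensus variable z. The quadratic local losses, variable names, and penalty parameter are illustrative assumptions standing in for each task's actual policy-gradient objective.

    import numpy as np

    # General-consensus ADMM template (scaled form). Each of n_tasks agents
    # holds local parameters theta[i] that must agree with the shared
    # consensus variable z; y holds the scaled dual variables.
    rng = np.random.default_rng(0)
    n_tasks, dim, rho = 8, 5, 1.0

    # Hypothetical local objectives f_i(theta) = 0.5 * ||theta - b[i]||^2,
    # a stand-in for each task's (negated) policy-gradient objective.
    b = rng.normal(size=(n_tasks, dim))

    theta = np.zeros((n_tasks, dim))  # local parameters, one row per task
    y = np.zeros((n_tasks, dim))      # scaled dual variables
    z = np.zeros(dim)                 # shared consensus variable

    for k in range(100):
        # Local step (parallel across tasks): minimize
        # f_i(theta) + (rho/2) * ||theta - z + y[i]||^2.
        # Closed form for the quadratic f_i above; a gradient step in general.
        theta = (b + rho * (z - y)) / (1.0 + rho)
        # Consensus step: aggregate the local variables into the shared z.
        z = (theta + y).mean(axis=0)
        # Dual ascent on the consensus constraint theta[i] = z.
        y += theta - z

    print("max consensus residual:", np.linalg.norm(theta - z, axis=1).max())
    print("z -> mean(b):", np.allclose(z, b.mean(axis=0)))

For these toy quadratic losses the iterates contract much faster than O(1/k); the rate quoted in the abstract is the guarantee for the general consensus formulation.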

Published

2017-02-13

How to Cite

El Bsat, S., Bou Ammar, H., & Taylor, M. (2017). Scalable Multitask Policy Gradient Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10942