Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach

Authors

  • James Queeney, Boston University
  • Ioannis Ch. Paschalidis, Boston University
  • Christos G. Cassandras, Boston University

DOI:

https://doi.org/10.1609/aaai.v35i11.17130

Keywords:

Reinforcement Learning

Abstract

For reinforcement learning techniques to be useful in real-world decision-making processes, they must produce robust performance from limited data. Deep policy optimization methods have achieved impressive results on complex tasks, but their real-world adoption remains limited because they often require large amounts of data to succeed. Because these methods rely on high-dimensional sample-based estimates, learning can become unstable when sample sizes are small. In this work, we develop techniques to control the uncertainty introduced by these estimates. We leverage these techniques to propose a deep policy optimization approach designed to produce stable performance even when data is scarce. The resulting algorithm, Uncertainty-Aware Trust Region Policy Optimization, generates robust policy updates that adapt to the level of uncertainty present throughout the learning process.
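The abstract states the key idea only at a high level: scale each policy update to the uncertainty of the sample-based estimates that drive it. The sketch below is a minimal Python illustration of that principle, not the authors' UA-TRPO algorithm; the functions `bootstrap_gradient_uncertainty` and `adaptive_trust_region_step`, the plain gradient-ascent update, and the scaling rule `delta_max / (1 + kappa * u)` are all hypothetical stand-ins assumed for illustration.

```python
import numpy as np

def bootstrap_gradient_uncertainty(per_sample_grads, n_boot=100, rng=None):
    """Estimate uncertainty in a sample-based policy gradient via the bootstrap.

    per_sample_grads: array of shape (n_samples, n_params), one gradient
    estimate per sampled trajectory. Returns the norm of the bootstrap
    standard error of the mean gradient.
    """
    rng = np.random.default_rng(rng)
    n = per_sample_grads.shape[0]
    boot_means = np.stack([
        per_sample_grads[rng.integers(0, n, size=n)].mean(axis=0)
        for _ in range(n_boot)
    ])
    # Spread of the bootstrap means approximates the estimator's standard error.
    return np.linalg.norm(boot_means.std(axis=0))

def adaptive_trust_region_step(theta, per_sample_grads, delta_max=0.05, kappa=1.0):
    """One uncertainty-scaled policy update (gradient ascent as a stand-in for TRPO).

    Hypothetical scaling rule: the trust region radius shrinks as the bootstrap
    standard error grows, keeping noisy small-sample updates conservative.
    """
    g = per_sample_grads.mean(axis=0)
    uncertainty = bootstrap_gradient_uncertainty(per_sample_grads)
    delta = delta_max / (1.0 + kappa * uncertainty)
    step = delta * g / (np.linalg.norm(g) + 1e-8)
    return theta + step, delta

# Usage: 32 sampled per-trajectory gradient estimates for a 4-parameter policy.
rng = np.random.default_rng(0)
grads = rng.normal(loc=0.5, scale=2.0, size=(32, 4))
theta = np.zeros(4)
theta, delta = adaptive_trust_region_step(theta, grads)
print(f"trust region radius used: {delta:.4f}")
```

The design choice sketched here mirrors what the abstract emphasizes: when resampling indicates that the gradient estimate is noisy, the effective trust region contracts, so the policy update stays conservative instead of destabilizing learning.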

Published

2021-05-18

How to Cite

Queeney, J., Paschalidis, I. C., & Cassandras, C. G. (2021). Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9377-9385. https://doi.org/10.1609/aaai.v35i11.17130

Issue

Vol. 35 No. 11 (2021)

Section

AAAI Technical Track on Machine Learning IV