Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach
Abstract
For reinforcement learning techniques to be useful in real-world decision making, they must produce robust performance from limited data. Deep policy optimization methods have achieved impressive results on complex tasks, but their real-world adoption remains limited because they often require large amounts of data to succeed. With small sample sizes, these methods can suffer from unstable learning due to their reliance on high-dimensional, sample-based estimates. In this work, we develop techniques to control the uncertainty introduced by these estimates. We leverage these techniques to propose a deep policy optimization approach designed to produce stable performance even when data is scarce. The resulting algorithm, Uncertainty-Aware Trust Region Policy Optimization, generates robust policy updates that adapt to the level of uncertainty present throughout the learning process.
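The core idea of an uncertainty-adaptive trust region can be illustrated with a toy sketch: shrink the allowed update size as the standard error of the sampled gradient estimate grows. The function name, the scaling rule, and all parameters below are illustrative assumptions, not the paper's actual algorithm.

```python
import random
import statistics

def adaptive_trust_region_step(theta, grad_samples, delta0=0.5, c=1.0):
    """Take a gradient step whose trust-region radius shrinks with the
    uncertainty (standard error) of the sampled gradient estimates.

    This is a hypothetical one-dimensional sketch of the general idea,
    not the update rule from the paper.
    """
    g = statistics.mean(grad_samples)
    se = statistics.stdev(grad_samples) / len(grad_samples) ** 0.5
    delta = delta0 / (1.0 + c * se)    # higher uncertainty -> smaller radius
    step = max(-delta, min(delta, g))  # clip the step to the trust region
    return theta + step, delta

# Noisier gradient samples yield a smaller trust-region radius.
random.seed(0)
low_noise = [1.0 + random.gauss(0, 0.01) for _ in range(32)]
high_noise = [1.0 + random.gauss(0, 1.0) for _ in range(32)]
_, radius_low = adaptive_trust_region_step(0.0, low_noise)
_, radius_high = adaptive_trust_region_step(0.0, high_noise)
```

Under this toy rule, `radius_high < radius_low`: when sample-based estimates are less reliable, the policy update is more conservative, mirroring the adaptivity the abstract describes.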
How to Cite
Queeney, J., Paschalidis, I. C., & Cassandras, C. G. (2021). Uncertainty-Aware Policy Optimization: A Robust, Adaptive Trust Region Approach. Proceedings of the AAAI Conference on Artificial Intelligence, 35(11), 9377-9385. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17130
AAAI Technical Track on Machine Learning IV