Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization


  • Olov Andersson, Linköping University
  • Fredrik Heintz, Linköping University
  • Patrick Doherty, Linköping University



Keywords

Reinforcement Learning, Gaussian Processes, Optimization, Robotics


Abstract

Reinforcement learning for robot control tasks in continuous environments is a challenging problem due to the high dimensionality of the state and action spaces, the time and resource costs of learning with a real robot, and the constraints imposed for its safe operation. In this paper we propose a model-based reinforcement learning approach for continuous environments with constraints. The approach combines model-based reinforcement learning with recent advances in approximate optimal control. This results in a bounded-rationality agent that makes decisions in real time by efficiently solving a sequence of constrained optimization problems on learned sparse Gaussian process models. Such a combination has several advantages. No high-dimensional policy needs to be computed or stored, and the learning problem often reduces to a set of lower-dimensional models of the dynamics. In addition, hard constraints can easily be included, and objectives can be changed in real time to allow for multiple or dynamic tasks. The efficacy of the approach is demonstrated on both an extended cart-pole domain and a challenging quadcopter navigation task using real data.
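The core loop described in the abstract is receding-horizon (model-predictive) control on a learned model: at each step the agent optimizes an action sequence over a short horizon subject to hard constraints, applies only the first action, observes the new state, and re-plans. The following is a minimal toy sketch of that idea, not the authors' implementation: a known velocity-controlled point mass stands in for the learned sparse Gaussian process model, and exhaustive search over a small discrete action set stands in for the paper's constrained nonlinear solver.

```python
import itertools

# Stand-in for a learned forward model. In the paper this would be a
# sparse Gaussian process fitted to observed transitions; here a known
# velocity-controlled point mass keeps the sketch self-contained.
def learned_dynamics(pos, action, dt=0.1):
    return pos + dt * action

def plan(pos, target, horizon=3, actions=(-1.0, 0.0, 1.0), pos_limit=2.0):
    """One receding-horizon planning step: enumerate action sequences on
    the learned model, reject any that violate the hard state constraint,
    and return the first action of the cheapest feasible sequence."""
    best_cost, best_first = float("inf"), 0.0
    for seq in itertools.product(actions, repeat=horizon):
        p, cost, feasible = pos, 0.0, True
        for a in seq:
            p = learned_dynamics(p, a)
            if abs(p) > pos_limit:          # hard constraint: stay inside [-2, 2]
                feasible = False
                break
            cost += (p - target) ** 2 + 0.01 * a ** 2
        if feasible and cost < best_cost:
            best_cost, best_first = cost, seq[0]
    return best_first

def run_episode(start=0.0, target=1.0, steps=30):
    # MPC loop: apply only the first planned action, then re-plan from
    # the resulting state.
    pos = start
    for _ in range(steps):
        pos = learned_dynamics(pos, plan(pos, target))
    return pos
```

Because the objective (`target`) is just a parameter of each optimization problem, it can be changed between steps without recomputing any policy, which is the flexibility the abstract highlights for multiple or dynamic tasks.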




How to Cite

Andersson, O., Heintz, F., & Doherty, P. (2015). Model-Based Reinforcement Learning in Continuous Environments Using Real-Time Constrained Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1).



Main Track: Novel Machine Learning Algorithms