Toward Deep Reinforcement Learning Without a Simulator: An Autonomous Steering Example

Authors

  • Bar Hilleli, Technion
  • Ran El-Yaniv, Technion

Keywords

Autonomous Driving, Deep Learning, Reinforcement Learning, Supervised Learning

Abstract

We propose a scheme for training a computerized agent to perform complex human tasks such as highway steering. The scheme is designed to follow a natural learning process whereby a human instructor teaches a computerized trainee. It leverages the weak supervision abilities of a (human) instructor who, while unable to perform the required task well herself, can provide coherent and learnable instantaneous reward signals to the computerized trainee. The learning process consists of three supervised elements followed by reinforcement learning. The supervised learning stages are: (i) supervised imitation learning; (ii) supervised reward induction; and (iii) supervised safety module construction. We implemented this scheme using deep convolutional networks and applied it to successfully create a computerized agent capable of autonomous highway steering in the well-known racing game Assetto Corsa. We demonstrate that all components are essential to effectively carry out reinforcement learning of the steering task using vision alone, without access to the driving simulator's internals, and operating in wall-clock time.
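The staged pipeline described in the abstract can be sketched as follows. This is a minimal structural illustration only: all function names and data structures are hypothetical stand-ins, and simple table lookups replace the deep convolutional networks used in the actual work.

```python
# Illustrative sketch of the four-stage training scheme. All names and the
# table-lookup "models" are hypothetical, not from the paper's implementation.

def train_imitation_policy(demonstrations):
    """Stage (i): supervised imitation learning.
    Fit a policy on (observation, action) pairs recorded from the instructor.
    Here: a toy majority-vote lookup instead of a ConvNet."""
    counts = {}
    for obs, act in demonstrations:
        counts.setdefault(obs, {}).setdefault(act, 0)
        counts[obs][act] += 1
    return {obs: max(acts, key=acts.get) for obs, acts in counts.items()}

def train_reward_model(labeled_frames):
    """Stage (ii): supervised reward induction.
    Learn an instantaneous reward signal from instructor-labeled frames."""
    return dict(labeled_frames)

def train_safety_module(labeled_frames):
    """Stage (iii): supervised safety module construction.
    Learn to flag unsafe states so they can be handled during RL."""
    return dict(labeled_frames)

def rl_fine_tune(policy, reward_model, safety_module, rollout):
    """Final stage: reinforcement learning with the induced reward; the
    safety module overrides actions in states it flags as unsafe.
    (A real implementation would update the policy from these rewards.)"""
    total_reward = 0.0
    for obs in rollout:
        action = policy.get(obs, "straight")
        if safety_module.get(obs) == "unsafe":
            action = "recover"  # safety override of the learned policy
        total_reward += reward_model.get(obs, 0.0)
    return total_reward

# Hypothetical usage with toy observations:
policy = train_imitation_policy([("drift_left", "steer_right"),
                                 ("drift_left", "steer_right"),
                                 ("centered", "straight")])
reward = train_reward_model([("centered", 1.0), ("drift_left", -0.5)])
safety = train_safety_module([("off_road", "unsafe")])
```

The key structural point, per the abstract, is that the three supervised stages produce the policy initialization, the reward signal, and the safety veto that together make simulator-free, real-time reinforcement learning feasible.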

Published

2018-04-25

How to Cite

Hilleli, B., & El-Yaniv, R. (2018). Toward Deep Reinforcement Learning Without a Simulator: An Autonomous Steering Example. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11490

Section

AAAI Technical Track: Human-AI Collaboration