Expressing Arbitrary Reward Functions as Potential-Based Advice

Authors

  • Anna Harutyunyan Vrije Universiteit Brussel
  • Sam Devlin University of York
  • Peter Vrancx Vrije Universiteit Brussel
  • Ann Nowé Vrije Universiteit Brussel

DOI

https://doi.org/10.1609/aaai.v29i1.9628

Abstract

Effectively incorporating external advice is an important problem in reinforcement learning, especially as the field moves into real-world applications. Potential-based reward shaping provides the agent with a specific form of additional reward, with the guarantee of policy invariance. In this work we present a novel way to incorporate an arbitrary reward function with the same guarantee, by implicitly translating it into the specific form of dynamic advice potentials, which are maintained as an auxiliary value function learned concurrently. We show that advice provided in this way captures the input reward function in expectation, and demonstrate its efficacy empirically.

Published

2015-02-21

How to Cite

Harutyunyan, A., Devlin, S., Vrancx, P., & Nowé, A. (2015). Expressing Arbitrary Reward Functions as Potential-Based Advice. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9628

Section

Main Track: Novel Machine Learning Algorithms