TY  - JOUR
AU  - Neider, Daniel
AU  - Gaglione, Jean-Raphael
AU  - Gavran, Ivan
AU  - Topcu, Ufuk
AU  - Wu, Bo
AU  - Xu, Zhe
PY  - 2021/05/18
Y2  - 2024/03/29
TI  - Advice-Guided Reinforcement Learning in a non-Markovian Environment
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 35
IS  - 10
SE  - AAAI Technical Track on Machine Learning III
DO  - 10.1609/aaai.v35i10.17096
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/17096
SP  - 9073
EP  - 9080
AB  - We study a class of reinforcement learning tasks in which the agent receives its reward for complex, temporally-extended behaviors sparsely. For such tasks, the problem is how to augment the state-space so as to make the reward function Markovian in an efficient way. While some existing solutions assume that the reward function is explicitly provided to the learning algorithm (e.g., in the form of a reward machine), the others learn the reward function from the interactions with the environment, assuming no prior knowledge provided by the user. In this paper, we generalize both approaches and enable the user to give advice to the agent, representing the user's best knowledge about the reward function, potentially fragmented, partial, or even incorrect. We formalize advice as a set of DFAs and present a reinforcement learning algorithm that takes advantage of such advice, with optimal convergence guarantee. The experiments show that using well-chosen advice can reduce the number of training steps needed for convergence to optimal policy, and can decrease the computation time to learn the reward function by up to two orders of magnitude.
ER  -