Reinforcement Learning with Perturbed Rewards

Authors

  • Jingkang Wang, University of Toronto
  • Yang Liu, University of California, Santa Cruz
  • Bo Li, University of Illinois, Urbana–Champaign

DOI:

https://doi.org/10.1609/aaai.v34i04.6086

Abstract

Recent studies have shown that reinforcement learning (RL) models are vulnerable in various noisy scenarios. For instance, the observed reward channel is often subject to noise in practice (e.g., when rewards are collected through sensors) and is therefore not fully reliable. In addition, in applications such as robotics, a deep reinforcement learning (DRL) algorithm can be manipulated into producing arbitrary errors by feeding it corrupted rewards. In this paper, we consider noisy RL problems with perturbed rewards, where the perturbation can be characterized by a reward confusion matrix. We develop a robust RL framework that enables agents to learn in noisy environments where only perturbed rewards are observed. Our framework builds on existing RL/DRL algorithms and is the first to address the biased noisy reward setting without any assumption on the true reward distribution (e.g., the zero-mean Gaussian noise assumed in previous works). The core ideas of our solution are to estimate a reward confusion matrix and to define a set of unbiased surrogate rewards. We prove the convergence and sample complexity of our approach. Extensive experiments on different DRL platforms show that policies trained with our estimated surrogate rewards achieve higher expected returns and converge faster than existing baselines. For instance, the state-of-the-art PPO algorithm obtains average-score improvements of 84.6% and 80.8% across five Atari games under error rates of 10% and 30%, respectively.
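As a rough illustration of the surrogate-reward idea, the Python/NumPy sketch below solves for surrogate reward values that are unbiased under the perturbation, assuming a known confusion matrix over two discrete reward levels. The names R, C, and surrogate are illustrative and do not come from the authors' code.

import numpy as np

# Minimal sketch of the unbiased surrogate-reward construction for discrete
# rewards, assuming the confusion matrix C is already known or estimated.
# C[i, j] = P(observe reward R[j] | true reward R[i]).  All names here are
# illustrative, not taken from the paper's implementation.

R = np.array([-1.0, 1.0])        # true reward levels
C = np.array([[0.9, 0.1],        # 10% chance that -1 is flipped to +1
              [0.3, 0.7]])       # 30% chance that +1 is flipped to -1

# Choose surrogate values r_hat so that C @ r_hat = R, which gives
# E[r_hat(observed) | true reward = R[i]] = R[i] for every level i.
surrogate = np.linalg.solve(C, R)

# Sanity check: the surrogate reward is unbiased under the perturbation.
assert np.allclose(C @ surrogate, R)
print(surrogate)                 # values assigned to each *observed* reward level

In such a scheme, the agent replaces each observed reward with the corresponding surrogate value before updating its policy, so standard RL/DRL algorithms can be reused unchanged.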

Published

2020-04-03

How to Cite

Wang, J., Liu, Y., & Li, B. (2020). Reinforcement Learning with Perturbed Rewards. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 6202-6209. https://doi.org/10.1609/aaai.v34i04.6086

Section

AAAI Technical Track: Machine Learning