Controlling Underestimation Bias in Reinforcement Learning via Quasi-median Operation

Authors

  • Wei Wei, Shanxi University
  • Yujia Zhang, Shanxi University
  • Jiye Liang, Shanxi University
  • Lin Li, Shanxi University
  • Yuze Li, Shanxi University

DOI:

https://doi.org/10.1609/aaai.v36i8.20840

Keywords:

Machine Learning (ML)

Abstract

Obtaining a good value estimate is one of the key problems in reinforcement learning (RL). Current off-policy methods, such as Maxmin Q-learning, TD3, and TADD, suffer from underestimation while correcting for overestimation. In this paper, we propose the Quasi-Median Operation, a novel way to mitigate underestimation bias by selecting the quasi-median from multiple state-action values. Based on the quasi-median operation, we propose Quasi-Median Q-learning (QMQ) for discrete-action tasks and Quasi-Median Delayed Deep Deterministic Policy Gradient (QMD3) for continuous-action tasks. Theoretically, the underestimation bias of our method is reduced while the estimation variance is significantly lower than that of Maxmin Q-learning, TD3, and TADD. We conduct extensive experiments on discrete- and continuous-action tasks, and the results show that our method outperforms state-of-the-art methods.
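
The abstract does not spell out the quasi-median itself, so the following is only an illustrative sketch, assuming the quasi-median of N estimates is the lower-middle order statistic (the median for odd N, the smaller of the two middle values for even N): an operator that sits between the min used by Maxmin Q-learning and TD3 and the plain mean.

    import numpy as np

    def quasi_median(q_values):
        """Lower-middle order statistic of N state-action value estimates.

        Hypothetical reading of the paper's quasi-median operation: for
        sorted estimates q_(1) <= ... <= q_(N), return q_(ceil(N/2)).
        This is less pessimistic than the min operator (Maxmin, TD3)
        and less optimistic than the mean, which is how it can trade
        off under- and overestimation bias.
        """
        q_sorted = np.sort(np.asarray(q_values, dtype=float))
        n = q_sorted.size
        return q_sorted[(n - 1) // 2]  # zero-based index of the lower middle value

    # Toy example: four critics score the same (state, action) pair.
    estimates = [1.8, 2.4, 2.1, 3.0]
    print(min(estimates))           # 1.8 -- min-based target, pessimistic
    print(quasi_median(estimates))  # 2.1 -- quasi-median target

In this toy example the min-based target is 1.8 while the quasi-median target is 2.1, showing how a quasi-median target is less prone to underestimation than the min-based targets it is compared against.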

Published

2022-06-28

How to Cite

Wei, W., Zhang, Y., Liang, J., Li, L., & Li, Y. (2022). Controlling Underestimation Bias in Reinforcement Learning via Quasi-median Operation. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8621-8628. https://doi.org/10.1609/aaai.v36i8.20840

Issue

Vol. 36 No. 8 (2022)

Section

AAAI Technical Track on Machine Learning III