Smoothing Advantage Learning

Authors

  • Yaozhong Gan, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence
  • Zhe Zhang, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence
  • Xiaoyang Tan, Nanjing University of Aeronautics and Astronautics, China; MIIT Key Laboratory of Pattern Analysis and Machine Intelligence

DOI

https://doi.org/10.1609/aaai.v36i6.20620

Keywords

Machine Learning (ML)

Abstract

Advantage learning (AL) aims to improve the robustness of value-based reinforcement learning against estimation errors through action-gap-based regularization. Unfortunately, the method tends to be unstable under function approximation. In this paper, we propose a simple variant of AL, named smoothing advantage learning (SAL), to alleviate this problem. The key to our method is to replace the original Bellman optimality operator in AL with a smooth one so as to obtain a more reliable estimate of the temporal difference target. We give a detailed account of the resulting action gap and of the performance bound for approximate SAL. Further theoretical analysis reveals that the proposed value smoothing technique not only helps to stabilize the training procedure of AL by controlling the trade-off between the convergence rate and the upper bound of the approximation errors, but also helps to increase the action gap between the optimal and sub-optimal action values.
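
To make the idea concrete, below is a minimal NumPy sketch of an AL-style temporal difference target in which the hard max of the Bellman optimality operator is replaced by a smooth stand-in. Here a mellowmax (log-sum-exp) operator plays the role of the smooth operator; the names `mellowmax` and `sal_style_target`, and the hyper-parameters `alpha` (advantage coefficient) and `tau` (smoothing temperature), are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def mellowmax(q, tau=1.0):
    """Smooth, differentiable stand-in for max(q): tau * log(mean(exp(q / tau)))."""
    return tau * np.log(np.mean(np.exp(q / tau)))

def sal_style_target(reward, q_sa, q_next, q_curr, gamma=0.99, alpha=0.9, tau=1.0):
    """AL-style TD target with the hard max smoothed out.

    reward: immediate reward r(s, a)
    q_sa:   Q(s, a) for the action actually taken (used in the advantage term)
    q_next: array of Q(s', a') over actions at the next state
    q_curr: array of Q(s, a') over actions at the current state
    """
    bellman = reward + gamma * mellowmax(q_next, tau)  # smoothed Bellman backup
    gap = q_sa - mellowmax(q_curr, tau)                # action-gap regularizer of AL
    return bellman + alpha * gap

# Tiny usage example with made-up values.
q_next = np.array([1.0, 0.5, 0.2])
q_curr = np.array([0.8, 0.6, 0.1])
print(sal_style_target(reward=1.0, q_sa=0.8, q_next=q_next, q_curr=q_curr))
```

As tau shrinks toward zero, mellowmax approaches the hard max and the target reduces to standard advantage learning; a larger tau gives a smoother target, mirroring the trade-off between convergence rate and approximation error discussed in the abstract.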

Published

2022-06-28

How to Cite

Gan, Y., Zhang, Z., & Tan, X. (2022). Smoothing Advantage Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(6), 6657-6664. https://doi.org/10.1609/aaai.v36i6.20620

Issue

Vol. 36 No. 6 (2022)

Section

AAAI Technical Track on Machine Learning I