Robust Action Gap Increasing with Clipped Advantage Learning
DOI: https://doi.org/10.1609/aaai.v36i8.20900
Keywords: Machine Learning (ML)
Abstract
Advantage Learning (AL) seeks to increase the action gap between the optimal action and its competitors, so as to improve robustness to estimation errors. However, the method becomes problematic when the optimal action induced by the approximated value function does not agree with the true optimal action. In this paper, we present a novel method, named clipped Advantage Learning (clipped AL), to address this issue. The method is inspired by our observation that blindly increasing the action gap for all samples, without accounting for whether each increase is necessary, can accumulate errors in the performance-loss bound and slow down value convergence; to avoid this, the advantage value should be adjusted adaptively. We show that our simple clipped AL operator not only enjoys a fast convergence guarantee but also retains proper action gaps, thereby achieving a good balance between a large action gap and fast convergence. The feasibility and effectiveness of the proposed method are verified empirically on several RL benchmarks with promising performance.
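As background, the standard AL backup adds a scaled (non-positive) advantage term to the Bellman target, pushing non-greedy actions further down. The sketch below contrasts this with a clipped variant that bounds how much that term can lower the target; note that the clipping rule `max(advantage, -c)` and the threshold `c` are illustrative assumptions here, not the paper's exact clipped-AL operator, which is defined in the full text.

```python
# Tabular sketch: Advantage Learning (AL) backup vs. a clipped variant.
# `q` maps each state to a list of action values. The clipping rule and
# threshold `c` below are assumptions for illustration only.

def al_backup(q, s, a, r, s_next, gamma=0.99, alpha=0.9):
    """Standard AL target: Bellman target plus a scaled advantage term."""
    bellman = r + gamma * max(q[s_next])
    advantage = q[s][a] - max(q[s])  # <= 0; widens the action gap
    return bellman + alpha * advantage

def clipped_al_backup(q, s, a, r, s_next, gamma=0.99, alpha=0.9, c=1.0):
    """Clipped variant: bound the gap-increasing correction to limit
    the error it can inject when the greedy action is misidentified."""
    bellman = r + gamma * max(q[s_next])
    advantage = max(q[s][a] - max(q[s]), -c)  # clip the penalty at -c
    return bellman + alpha * advantage
```

For example, with `q = {0: [1.0, 0.5], 1: [0.2, 0.3]}`, `gamma=0.9`, and `alpha=0.5`, the AL target for `(s=0, a=1)` is `1.27 - 0.25 = 1.02`, while clipping at `c=0.2` yields `1.27 - 0.1 = 1.17`, a smaller penalty on the non-greedy action.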
Published
2022-06-28
How to Cite
Zhang, Z., Gan, Y., & Tan, X. (2022). Robust Action Gap Increasing with Clipped Advantage Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 9145-9152. https://doi.org/10.1609/aaai.v36i8.20900
Section
AAAI Technical Track on Machine Learning III