Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning
DOI:
https://doi.org/10.1609/aaai.v38i13.29362
Keywords:
ML: Reinforcement Learning, ROB: Behavior Learning & Control, ML: Deep Learning Algorithms
Abstract
In deep reinforcement learning, estimating the value function to evaluate the quality of states and actions is essential. The value function is often trained using the least squares method, which implicitly assumes a Gaussian error distribution. However, a recent study suggested that the error distribution for training the value function is often skewed because of the properties of the Bellman operator, violating the implicit assumption of a normal error distribution in the least squares method. To address this, we propose a method called Symmetric Q-learning, in which synthetic noise generated from a zero-mean distribution is added to the target values to produce a Gaussian error distribution. We evaluated the proposed method on continuous control benchmark tasks in MuJoCo. It improved the sample efficiency of a state-of-the-art reinforcement learning method by reducing the skewness of the error distribution.
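The core idea in the abstract can be illustrated with a minimal sketch: build standard TD targets, then perturb them with synthetic zero-mean noise so the resulting Bellman-error distribution is closer to Gaussian. The function name, the fixed Gaussian noise model, and the `noise_scale` parameter below are illustrative assumptions, not the paper's actual noise-distribution estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

def symmetric_q_target(rewards, next_q, gamma=0.99, noise_scale=0.05):
    """Hypothetical sketch of the Symmetric Q-learning idea:
    add zero-mean synthetic noise to the TD targets so that the
    Bellman-error distribution used for least-squares training is
    less skewed. The plain Gaussian noise used here is a stand-in
    for the paper's fitted zero-mean noise distribution."""
    targets = rewards + gamma * next_q                # standard TD(0) targets
    noise = rng.normal(0.0, noise_scale, size=targets.shape)  # zero-mean perturbation
    return targets + noise
```

Because the added noise has zero mean, the expected target is unchanged; only the shape of the error distribution around it is altered.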
Published
2024-03-24
How to Cite
Omura, M., Osa, T., Mukuta, Y., & Harada, T. (2024). Symmetric Q-learning: Reducing Skewness of Bellman Error in Online Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(13), 14474–14481. https://doi.org/10.1609/aaai.v38i13.29362
Section
AAAI Technical Track on Machine Learning IV