Stabilizing Q Learning Via Soft Mellowmax Operator

Authors

  • Yaozhong Gan, Nanjing University of Aeronautics and Astronautics, China
  • Zhe Zhang, Nanjing University of Aeronautics and Astronautics, China
  • Xiaoyang Tan, Nanjing University of Aeronautics and Astronautics, China

DOI:

https://doi.org/10.1609/aaai.v35i9.16919

Keywords:

Reinforcement Learning

Abstract

Learning complicated value functions in high-dimensional state spaces by function approximation is a challenging task, partly because the max operator used in temporal difference updates can, in theory, cause instability for most linear or non-linear approximation schemes. Mellowmax is a recently proposed differentiable and non-expansion softmax operator that allows convergent behavior in learning and planning. Unfortunately, the performance bound for the fixed point it converges to remains unclear, and in practice its parameter is sensitive to the domain and has to be tuned case by case. Finally, the Mellowmax operator may suffer from oversmoothing, as it ignores the probability of each action being taken when aggregating action values. In this paper we address all the above issues with an enhanced Mellowmax operator, named SM2 (Soft Mellowmax). In particular, the proposed operator is reliable, easy to implement, and has a provable performance guarantee, while preserving all the advantages of Mellowmax. Furthermore, we show that our SM2 operator can be applied to challenging multi-agent reinforcement learning scenarios, leading to stable value function approximation and state-of-the-art performance.
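The abstract contrasts Mellowmax's uniform averaging of action values with an aggregation that accounts for how likely each action is to be taken. The sketch below is only an illustration of that distinction, not the paper's exact SM2 definition: it implements the standard Mellowmax operator and a hypothetical softmax-weighted variant, where the function names `mellowmax` and `soft_mellowmax` and the temperature parameters `omega` and `alpha` are illustrative assumptions.

```python
import numpy as np

def mellowmax(q, omega=5.0):
    """Standard Mellowmax: (1/omega) * log(mean(exp(omega * q))).
    Every action value is aggregated with the same uniform weight 1/n."""
    q = np.asarray(q, dtype=float)
    m = q.max()  # shift for numerical stability of the log-sum-exp
    return m + np.log(np.mean(np.exp(omega * (q - m)))) / omega

def soft_mellowmax(q, omega=5.0, alpha=1.0):
    """Illustrative softmax-weighted variant (an assumption, not necessarily
    the paper's exact SM2 operator): the uniform weights 1/n are replaced by
    softmax(alpha * q), so higher-valued actions contribute more."""
    q = np.asarray(q, dtype=float)
    m = q.max()
    w = np.exp(alpha * (q - m))
    w /= w.sum()  # softmax weights over actions
    return m + np.log(np.sum(w * np.exp(omega * (q - m)))) / omega

# Usage: both operators return a value between the mean and the max Q-value.
q_values = [1.0, 2.0, 3.0]
print(mellowmax(q_values), soft_mellowmax(q_values), max(q_values))
```

With `alpha` set to zero the softmax weights become uniform and the variant reduces to standard Mellowmax, which is one way to read the abstract's claim that SM2 preserves the advantages of Mellowmax while weighting actions by their selection probability.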

Published

2021-05-18

How to Cite

Gan, Y., Zhang, Z., & Tan, X. (2021). Stabilizing Q Learning Via Soft Mellowmax Operator. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 7501-7509. https://doi.org/10.1609/aaai.v35i9.16919

Section

AAAI Technical Track on Machine Learning II