Efficient Continuous Control with Double Actors and Regularized Critics
DOI:
https://doi.org/10.1609/aaai.v36i7.20732
Keywords:
Machine Learning (ML), Planning, Routing, And Scheduling (PRS)
Abstract
How to obtain good value estimates is a critical problem in Reinforcement Learning (RL). Current value estimation methods in continuous control, such as DDPG and TD3, suffer from unnecessary over- or underestimation. In this paper, we explore the potential of double actors, which have long been neglected, for better value estimation in the continuous setting. First, we find, interestingly, that double actors improve the exploration ability of the agent. Next, we uncover the bias-alleviation property of double actors: they mitigate overestimation with a single critic and underestimation with double critics, respectively. Finally, to counter the potentially pessimistic value estimates under double critics, we propose to regularize the critics under the double-actor architecture. Together, these components form the Double Actors Regularized Critics (DARC) algorithm. Extensive experiments on challenging continuous control benchmarks, MuJoCo and PyBullet, show that DARC significantly outperforms current baselines with higher average return and better sample efficiency.
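To make the abstract's ideas concrete, the sketch below illustrates one plausible way a double-actor, regularized-critic target could be computed in a TD3-style setup. It is a minimal sketch only: the names (actor1_t, actor2_t, critic1, critic2 and their target copies) and the weights nu and lam are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def darc_critic_loss(batch, actor1_t, actor2_t,
                     critic1, critic2, critic1_t, critic2_t,
                     gamma=0.99, nu=0.5, lam=0.005):
    # Hypothetical sketch of a double-actor, regularized-critic update;
    # hyperparameters nu and lam are illustrative, not the paper's values.
    s, a, r, s2, done = batch  # tensors sampled from a replay buffer

    with torch.no_grad():
        # Each target actor proposes an action for the next state.
        candidates = [actor1_t(s2), actor2_t(s2)]
        targets = []
        for a2 in candidates:
            q1, q2 = critic1_t(s2, a2), critic2_t(s2, a2)
            # Softly combine the pessimistic (min) and optimistic (max)
            # critic estimates to trade off under- and overestimation.
            targets.append(nu * torch.min(q1, q2) + (1 - nu) * torch.max(q1, q2))
        # Keep the better of the two actors' proposals.
        q_next = torch.max(targets[0], targets[1])
        y = r + gamma * (1 - done) * q_next

    q1, q2 = critic1(s, a), critic2(s, a)
    td_loss = F.mse_loss(q1, y) + F.mse_loss(q2, y)
    # Regularize the two critics toward each other to curb divergence.
    reg = F.mse_loss(q1, q2)
    return td_loss + lam * reg
```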
Published
2022-06-28
How to Cite
Lyu, J., Ma, X., Yan, J., & Li, X. (2022). Efficient Continuous Control with Double Actors and Regularized Critics. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7655-7663. https://doi.org/10.1609/aaai.v36i7.20732
Section
AAAI Technical Track on Machine Learning II