CTD4 – a Deep Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics

Authors

  • David Valencia, University of Auckland
  • Henry Williams, University of Auckland
  • Yuning Xing, University of Auckland
  • Trevor Gee, University of Auckland
  • Bruce A. MacDonald, University of Auckland
  • Minas Liarokapis, New Dexterity Lab

DOI:

https://doi.org/10.1609/aaai.v39i20.35391

Abstract

Categorical Distributional Reinforcement Learning (CDRL) has demonstrated superior sample efficiency in learning complex tasks compared to conventional Reinforcement Learning (RL) approaches. However, the practical application of CDRL is encumbered by challenging projection steps, detailed parameter tuning, and a reliance on domain knowledge. This paper addresses these challenges by introducing a pioneering Continuous Distributional Model-Free RL algorithm tailored for continuous action spaces. The proposed algorithm simplifies the implementation of distributional RL, adopting an actor-critic architecture wherein the critic outputs a continuous probability distribution. Additionally, we propose an ensemble of multiple critics fused through a Kalman fusion mechanism to mitigate overestimation bias. Through a series of experiments, we validate that our proposed method provides a sample-efficient solution for executing complex continuous-control tasks.
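The fusion step described in the abstract combines the Gaussian value estimates of several critics into a single distribution. A minimal sketch of a Kalman-style fusion of independent Gaussians (a precision-weighted average) is shown below; the function and variable names are illustrative and not taken from the paper's code.

```python
import numpy as np

def kalman_fuse(means, variances):
    """Fuse independent Gaussian estimates N(mu_i, sigma_i^2) Kalman-style.

    The fused precision is the sum of the individual precisions, and the
    fused mean is the precision-weighted average of the individual means.
    Lower-variance (more confident) critics therefore dominate the result.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances          # weight of each estimate
    fused_var = 1.0 / precisions.sum()    # combined uncertainty shrinks
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Example: fusing three critics' Q-value distributions for one state-action pair
mu, var = kalman_fuse([10.0, 12.0, 11.0], [1.0, 4.0, 2.0])
```

In this sketch, each critic contributes in proportion to its confidence, which is one way an ensemble can temper the overestimation bias of any single critic.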

Published

2025-04-11

How to Cite

Valencia, D., Williams, H., Xing, Y., Gee, T., MacDonald, B. A., & Liarokapis, M. (2025). CTD4 – a Deep Continuous Distributional Actor-Critic Agent with a Kalman Fusion of Multiple Critics. Proceedings of the AAAI Conference on Artificial Intelligence, 39(20), 20956–20963. https://doi.org/10.1609/aaai.v39i20.35391

Section

AAAI Technical Track on Machine Learning VI