Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs

Authors

  • Sergio Rozada, King Juan Carlos University
  • Dongsheng Ding, University of Pennsylvania
  • Antonio G. Marques, King Juan Carlos University
  • Alejandro Ribeiro, University of Pennsylvania

DOI:

https://doi.org/10.1609/aaai.v39i19.34225

Abstract

We study the problem of computing deterministic optimal policies for constrained Markov decision processes (MDPs) with continuous state and action spaces, which are widely encountered in constrained dynamical systems. Designing deterministic policy gradient methods in continuous state and action spaces is particularly challenging: state-action pairs cannot be enumerated, and the adoption of deterministic policies hinders the application of existing policy gradient methods for constrained MDPs. To this end, we develop a deterministic policy gradient primal-dual method that finds an optimal deterministic policy with non-asymptotic convergence guarantees. Specifically, we leverage regularization of the Lagrangian of the constrained MDP to propose a deterministic policy gradient primal-dual (D-PGPD) algorithm that updates the deterministic policy via a quadratic-regularized gradient ascent step and the dual variable via a quadratic-regularized gradient descent step. We prove that the primal-dual iterates of D-PGPD converge at a sub-linear rate to an optimal regularized primal-dual pair. We then instantiate D-PGPD with function approximation and prove that its primal-dual iterates retain this sub-linear convergence rate, up to a function approximation error. Furthermore, we demonstrate the effectiveness of our method in two continuous control problems: robot navigation and fluid control. To the best of our knowledge, this is the first work that proposes a deterministic policy search method for continuous-space constrained MDPs.
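The abstract's alternating primal-dual scheme can be illustrated with a minimal sketch. The code below is a hypothetical toy rendition, not the paper's algorithm: it treats the policy as a plain parameter vector, assumes user-supplied gradient oracles (`grad_reward`, `grad_constraint`, `constraint_value` are illustrative names), and uses a simple quadratic regularizer with strength `tau` in place of the paper's exact regularized Lagrangian.

```python
import numpy as np

def d_pgpd_sketch(grad_reward, grad_constraint, constraint_value, theta0,
                  lam0=0.0, eta=0.01, tau=0.1, steps=2000):
    """Toy primal-dual loop in the spirit of D-PGPD (illustrative only).

    theta: deterministic policy parameters; lam: dual variable (lam >= 0).
    tau: assumed quadratic-regularization strength for both updates.
    """
    theta, lam = np.asarray(theta0, dtype=float), lam0
    for _ in range(steps):
        # Primal: gradient ascent on the regularized Lagrangian
        # L(theta, lam) = reward + lam * constraint - (tau/2)||theta||^2.
        g = grad_reward(theta) + lam * grad_constraint(theta) - tau * theta
        theta = theta + eta * g
        # Dual: gradient descent with quadratic regularization -(tau/2) lam^2,
        # projected onto lam >= 0.
        lam = max(0.0, lam - eta * (constraint_value(theta) - tau * lam))
    return theta, lam
```

For instance, maximizing the toy reward -(theta - 2)^2 subject to theta >= 1 (constraint value theta - 1) drives lam up while the constraint is violated and back to zero once theta enters the feasible region, with theta settling near the tau-regularized optimum 4/2.1.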

Published

2025-04-11

How to Cite

Rozada, S., Ding, D., Marques, A. G., & Ribeiro, A. (2025). Deterministic Policy Gradient Primal-Dual Methods for Continuous-Space Constrained MDPs. Proceedings of the AAAI Conference on Artificial Intelligence, 39(19), 20200–20208. https://doi.org/10.1609/aaai.v39i19.34225

Section

AAAI Technical Track on Machine Learning V