Risk-Sensitive Exponential Actor Critic

Authors

  • Alonso Granados, University of Arizona
  • Jason Pacheco, University of Arizona

DOI:

https://doi.org/10.1609/aaai.v40i26.39280

Abstract

Model-free deep reinforcement learning (RL) algorithms have achieved tremendous success on a range of challenging tasks. However, safety concerns remain when these methods are deployed in real-world applications, necessitating risk-aware agents. A common utility for learning such risk-aware agents is the entropic risk measure, but current policy gradient methods optimizing this measure must perform high-variance and numerically unstable updates. As a result, existing risk-sensitive model-free approaches are limited to simple tasks and tabular settings. In this paper, we provide a comprehensive theoretical justification for policy gradient methods on the entropic risk measure, including on- and off-policy gradient theorems for both stochastic and deterministic policies. Motivated by this theory, we propose the risk-sensitive exponential actor-critic (rsEAC), an off-policy model-free approach that incorporates novel procedures to avoid explicitly representing exponential value functions and their gradients while optimizing its policy with respect to the entropic risk measure. We show that rsEAC produces more numerically stable updates than existing approaches and reliably learns risk-sensitive policies on challenging risky variants of continuous control tasks in MuJoCo.
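As background (not part of the paper's text): the entropic risk measure of a return R is commonly defined as ρ_β(R) = (1/β) log E[exp(βR)], where under one common sign convention β < 0 yields risk-averse and β > 0 risk-seeking behavior. The Python sketch below is a minimal Monte Carlo estimator illustrating the numerical issue the abstract alludes to: naive evaluation of exp(βR) overflows for large |βR|, which a log-sum-exp rewrite avoids. The function name and the Gaussian example are illustrative assumptions, not the paper's implementation.

    import numpy as np

    def entropic_risk(returns, beta):
        """Monte Carlo estimate of (1/beta) * log E[exp(beta * R)].

        Uses the log-sum-exp trick: computing exp(beta * R) directly
        overflows once |beta * R| is large, one source of the numerical
        instability the abstract refers to.
        """
        x = beta * np.asarray(returns, dtype=float)
        m = x.max()
        # log E[exp(x)] = m + log(mean(exp(x - m))); exp(x - m) <= 1, so no overflow
        log_mgf = m + np.log(np.mean(np.exp(x - m)))
        return log_mgf / beta

    # For R ~ N(mu, sigma^2) the measure is mu + beta * sigma^2 / 2 in closed
    # form, so these estimates should land near 0.0 and 2.0 respectively.
    rng = np.random.default_rng(0)
    returns = rng.normal(loc=1.0, scale=2.0, size=100_000)
    print(entropic_risk(returns, beta=-0.5))  # risk-averse: below the mean
    print(entropic_risk(returns, beta=+0.5))  # risk-seeking: above the mean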

Published

2026-03-14

How to Cite

Granados, A., & Pacheco, J. (2026). Risk-Sensitive Exponential Actor Critic. Proceedings of the AAAI Conference on Artificial Intelligence, 40(26), 21343–21351. https://doi.org/10.1609/aaai.v40i26.39280

Section

AAAI Technical Track on Machine Learning III