TY - JOUR
AU - Jain, Arushi
AU - Patil, Gandharv
AU - Jain, Ayush
AU - Khetarpal, Khimya
AU - Precup, Doina
PY - 2021/05/18
Y2 - 2024/03/28
TI - Variance Penalized On-Policy and Off-Policy Actor-Critic
JF - Proceedings of the AAAI Conference on Artificial Intelligence
JA - AAAI
VL - 35
IS - 9
SE - AAAI Technical Track on Machine Learning II
DO - 10.1609/aaai.v35i9.16964
UR - https://ojs.aaai.org/index.php/AAAI/article/view/16964
SP - 7899-7907
AB - Reinforcement learning algorithms are typically geared towards optimizing the expected return of an agent. However, in many practical applications, low variance in the return is desired to ensure the reliability of an algorithm. In this paper, we propose on-policy and off-policy actor-critic algorithms that optimize a performance criterion involving both the mean and the variance of the return. Previous work uses the second moment of the return to estimate the variance indirectly. Instead, we use a much simpler, recently proposed direct variance estimator which updates the estimates incrementally using temporal difference methods. Using the variance-penalized criterion, we guarantee the convergence of our algorithm to locally optimal policies for finite state-action Markov decision processes. We demonstrate the utility of our algorithm in tabular and continuous MuJoCo domains. Our approach not only performs on par with actor-critic and prior variance-penalization baselines in terms of expected return, but also generates trajectories that have lower variance in the return.
ER -
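
Editor's note (illustration only, not part of the bibliographic record): the abstract describes a variance-penalized criterion optimized with a direct, TD-style variance estimator. The sketch below is a minimal, generic rendering of that idea, assuming a tabular actor-critic on a hypothetical toy chain MDP, a penalty weight psi, and the common form of a direct estimator in which the squared TD error serves as a meta-reward with meta-discount gamma**2; the environment, step sizes, and exact update rules are assumptions, not the authors' published algorithm.

    # Illustrative sketch of a variance-penalized tabular actor-critic.
    # All hyperparameters and the toy MDP are hypothetical.
    import numpy as np

    n_states, n_actions = 5, 2
    gamma, psi = 0.95, 0.1                    # discount and variance-penalty weight (assumed)
    alpha_v, alpha_m, alpha_pi = 0.1, 0.1, 0.01

    V = np.zeros(n_states)                    # value estimate
    M = np.zeros(n_states)                    # direct estimate of the variance of the return
    theta = np.zeros((n_states, n_actions))   # policy logits

    def policy(s):
        p = np.exp(theta[s] - theta[s].max())
        return p / p.sum()

    def step(s, a, rng):
        # Toy chain: action 1 moves right (reward 1 at the goal), action 0 is noisy.
        if a == 1:
            s2 = min(s + 1, n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
        else:
            s2 = max(s - 1, 0)
            r = rng.normal(0.0, 1.0)          # zero-mean but high-variance reward
        return s2, r, s2 == n_states - 1

    rng = np.random.default_rng(0)
    for episode in range(2000):
        s, done = 0, False
        while not done:
            p = policy(s)
            a = rng.choice(n_actions, p=p)
            s2, r, done = step(s, a, rng)

            # Standard TD error for the value function.
            delta = r + (0.0 if done else gamma * V[s2]) - V[s]
            V[s] += alpha_v * delta

            # Direct variance estimator: squared TD error as meta-reward,
            # gamma**2 as meta-discount (one common form of such estimators).
            delta_m = delta ** 2 + (0.0 if done else gamma ** 2 * M[s2]) - M[s]
            M[s] += alpha_m * delta_m

            # Variance-penalized actor update: ascend on delta - psi * delta_m.
            grad_log = -p
            grad_log[a] += 1.0
            theta[s] += alpha_pi * (delta - psi * delta_m) * grad_log

            s = s2

With psi = 0 this reduces to a plain actor-critic; increasing psi trades expected return for lower return variance, which is the qualitative behavior the abstract reports.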