Conjugated Discrete Distributions for Distributional Reinforcement Learning

Authors

  • Björn Lindenberg, Linnaeus University
  • Jonas Nordqvist, Linnaeus University
  • Karl-Olof Lindahl, Linnaeus University

DOI:

https://doi.org/10.1609/aaai.v36i7.20716

Keywords:

Machine Learning (ML)

Abstract

In this work we continue to build upon recent advances in reinforcement learning for finite Markov processes. A common approach among existing algorithms, both single-actor and distributed, is to either clip rewards or to apply a transformation to the Q-functions in order to handle a wide range of magnitudes in the real discounted returns. We show theoretically that one of the most successful such methods may fail to yield an optimal policy when the process is non-deterministic. As a remedy, we argue that distributional reinforcement learning resolves this situation completely. By introducing a conjugated distributional operator, we can handle a large class of transformations of real returns with guaranteed theoretical convergence. Based on this operator, we propose an approximating single-actor algorithm that trains agents directly on unaltered rewards using a proper distributional metric, the Cramér distance. To evaluate its performance in a stochastic setting, we train agents on a suite of 55 Atari 2600 games with sticky actions and obtain state-of-the-art performance compared to other well-known algorithms in the Dopamine framework.
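The conjugated operator and training loss are defined in the full paper; as a rough illustration only (not the authors' code), the sketch below computes the Cramér distance mentioned in the abstract, i.e. the l2 distance between the cumulative distribution functions of two categorical value distributions on a shared, equally spaced support. The function name and the example return atoms are our own assumptions for illustration.

    import numpy as np

    def cramer_distance(p, q, support):
        # Cramér (l2) distance between two categorical distributions
        # defined on the same, equally spaced support of return atoms.
        p = np.asarray(p, dtype=float)
        q = np.asarray(q, dtype=float)
        delta = support[1] - support[0]       # spacing of the common support
        cdf_diff = np.cumsum(p - q)           # pointwise difference of the two CDFs
        return np.sqrt(delta * np.sum(cdf_diff ** 2))

    # Example: uniform distribution vs. a point mass at zero return
    support = np.linspace(-10.0, 10.0, 51)
    p = np.full(51, 1.0 / 51)
    q = np.zeros(51); q[25] = 1.0
    print(cramer_distance(p, q, support))

Because both CDFs are step functions on the same grid, the integral of their squared difference reduces to the spacing times the summed squared differences of the cumulative sums, which is what the snippet evaluates.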

Published

2022-06-28

How to Cite

Lindenberg, B., Nordqvist, J., & Lindahl, K.-O. (2022). Conjugated Discrete Distributions for Distributional Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7516-7524. https://doi.org/10.1609/aaai.v36i7.20716

Section

AAAI Technical Track on Machine Learning II