Beyond Expected Return: Accounting for Policy Reproducibility When Evaluating Reinforcement Learning Algorithms

Authors

  • Manon Flageat, Imperial College London
  • Bryan Lim, Imperial College London
  • Antoine Cully, Imperial College London

DOI:

https://doi.org/10.1609/aaai.v38i11.29090

Keywords:

ML: Reinforcement Learning, ML: Evaluation and Analysis

Abstract

Many applications in Reinforcement Learning (RL) involve noise or stochasticity in the environment. Beyond their impact on learning, these uncertainties cause the exact same policy to perform differently, i.e. yield a different return, from one roll-out to another. Common evaluation procedures in RL summarise the consequent return distributions using solely the expected return, which does not account for the spread of the distribution. Our work defines this spread as the policy reproducibility: the ability of a policy to obtain similar performance when rolled out many times, a crucial property in some real-world applications. We highlight that existing procedures that only use the expected return are limited on two fronts: first, an infinite number of return distributions with a wide range of performance-reproducibility trade-offs can have the same expected return, limiting its effectiveness when used for comparing policies; second, the expected return metric does not leave any room for practitioners to choose the best trade-off value for considered applications. In this work, we address these limitations by recommending the use of the Lower Confidence Bound, a metric taken from Bayesian optimisation that provides the user with a preference parameter to choose a desired performance-reproducibility trade-off. We also formalise and quantify policy reproducibility, and demonstrate the benefit of our metrics through extensive experiments with popular RL algorithms on common uncertain RL tasks.
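To illustrate the trade-off the abstract describes, here is a minimal sketch of a Lower Confidence Bound summary over rollout returns. The exact formulation and preference parameter used in the paper may differ; the common form `mean - alpha * std`, the `alpha` parameter, and the helper name below are illustrative assumptions.

```python
# Sketch: Lower Confidence Bound (LCB) as a reproducibility-aware
# summary of a policy's return distribution. Assumed form:
# LCB = mean - alpha * std, where alpha encodes the user's preference
# for reproducibility over raw expected return.
from statistics import mean, pstdev

def lower_confidence_bound(returns, alpha=1.0):
    """alpha = 0 recovers the expected return; larger alpha
    increasingly penalises the spread of the return distribution."""
    return mean(returns) - alpha * pstdev(returns)

# Two policies with identical expected return but different spread:
stable = [10.0, 10.5, 9.5, 10.0]   # reproducible roll-outs
erratic = [0.0, 20.0, 5.0, 15.0]   # same mean, high variance

# Expected return alone cannot distinguish them,
# but the LCB prefers the reproducible policy:
print(lower_confidence_bound(stable, alpha=1.0) >
      lower_confidence_bound(erratic, alpha=1.0))  # True
```

Setting `alpha` lets a practitioner pick a point on the performance-reproducibility trade-off, which is exactly the room the plain expected-return metric does not provide.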

Published

2024-03-24

How to Cite

Flageat, M., Lim, B., & Cully, A. (2024). Beyond Expected Return: Accounting for Policy Reproducibility When Evaluating Reinforcement Learning Algorithms. Proceedings of the AAAI Conference on Artificial Intelligence, 38(11), 12024-12032. https://doi.org/10.1609/aaai.v38i11.29090

Section

AAAI Technical Track on Machine Learning II