Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate

Authors

  • Mirco Mutti, Politecnico di Milano & Università di Bologna
  • Lorenzo Pratissoli, Politecnico di Milano
  • Marcello Restelli, Politecnico di Milano

DOI

https://doi.org/10.1609/aaai.v35i10.17091

Keywords

Reinforcement Learning

Abstract

In a reward-free environment, what is a suitable intrinsic objective for an agent to pursue so that it can learn an optimal task-agnostic exploration policy? In this paper, we argue that the entropy of the state distribution induced by finite-horizon trajectories is a sensible target. In particular, we present a novel and practical policy-search algorithm, Maximum Entropy POLicy optimization (MEPOL), to learn a policy that maximizes a non-parametric, $k$-nearest neighbors estimate of the state distribution entropy. In contrast to known methods, MEPOL is completely model-free, as it requires neither estimating the state distribution of any policy nor modeling the transition dynamics. Finally, we empirically show that MEPOL learns a maximum-entropy exploration policy in high-dimensional, continuous-control domains, and that this policy facilitates learning meaningful reward-based tasks downstream.

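As a rough illustration of the objective described in the abstract, the sketch below estimates the entropy of a batch of sampled states with a standard k-nearest-neighbors (Kozachenko-Leonenko-style) estimator. The function name, the value of k, and the Euclidean metric are assumptions made for illustration; the exact estimator and bias-correction terms used by MEPOL may differ from this sketch.

    # Illustrative k-NN entropy estimate over sampled states (assumed form,
    # not necessarily the exact estimator used by MEPOL).
    import numpy as np
    from scipy.special import gamma, digamma

    def knn_entropy(states: np.ndarray, k: int = 4) -> float:
        """Kozachenko-Leonenko-style entropy estimate from N states in R^d."""
        n, d = states.shape
        assert n > k, "need more samples than neighbors"
        # Pairwise Euclidean distances (O(N^2); a KD-tree scales better).
        dists = np.linalg.norm(states[:, None, :] - states[None, :, :], axis=-1)
        # Distance from each sample to its k-th nearest neighbor
        # (column 0 of the sorted row is the sample itself).
        eps = np.sort(dists, axis=1)[:, k]
        # Volume of the unit ball in R^d.
        v_d = np.pi ** (d / 2) / gamma(d / 2 + 1)
        # Differential entropy estimate.
        return float(digamma(n) - digamma(k) + np.log(v_d) + d * np.mean(np.log(eps + 1e-12)))

In a MEPOL-like loop, an estimate of this kind, computed on states sampled from the current policy, would serve as the scalar exploration objective that the policy-gradient update seeks to increase.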

Published

2021-05-18

How to Cite

Mutti, M., Pratissoli, L., & Restelli, M. (2021). Task-Agnostic Exploration via Policy Gradient of a Non-Parametric State Entropy Estimate. Proceedings of the AAAI Conference on Artificial Intelligence, 35(10), 9028-9036. https://doi.org/10.1609/aaai.v35i10.17091

Issue

Vol. 35 No. 10 (2021)

Section

AAAI Technical Track on Machine Learning III