Exploration by Maximizing Rényi Entropy for Reward-Free RL Framework

Authors

  • Chuheng Zhang, IIIS, Tsinghua University
  • Yuanying Cai, IIIS, Tsinghua University
  • Longbo Huang, IIIS, Tsinghua University
  • Jian Li, IIIS, Tsinghua University

DOI:

https://doi.org/10.1609/aaai.v35i12.17297

Keywords:

Reinforcement Learning

Abstract

Exploration is essential for reinforcement learning (RL). To address the challenges of exploration, we consider a reward-free RL framework that completely separates exploration from exploitation and poses new challenges for exploration algorithms. In the exploration phase, the agent learns an exploratory policy by interacting with a reward-free environment and collects a dataset of transitions by executing that policy. In the planning phase, the agent computes a good policy for any given reward function based on the dataset, without further interaction with the environment. This framework is well suited to the meta-RL setting, where many reward functions are of interest. For the exploration phase, we propose to maximize the Rényi entropy over the state-action space and justify this objective theoretically; the objective succeeds because Rényi entropy encourages the agent to visit hard-to-reach state-action pairs. We further derive a policy gradient formulation for this objective and design a practical exploration algorithm that scales to complex environments. For the planning phase, we solve for good policies under arbitrary reward functions using a batch RL algorithm. Empirically, we show that our exploration algorithm is effective and sample efficient, and yields superior policies for arbitrary reward functions in the planning phase.
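
For reference (this is the standard definition, not text quoted from the page above), the Rényi entropy of order \alpha of the state-action distribution d^\pi induced by a policy \pi is

H_\alpha(d^\pi) = \frac{1}{1-\alpha} \log \sum_{(s,a)} d^\pi(s,a)^\alpha, \qquad \alpha > 0, \ \alpha \neq 1,

which recovers the Shannon entropy in the limit \alpha \to 1. For \alpha \in (0, 1), the marginal entropy gain from increasing a visitation probability scales as d^\pi(s,a)^{\alpha-1}, which diverges polynomially as d^\pi(s,a) \to 0, whereas under Shannon entropy the gain grows only logarithmically. This is one way to see why the objective rewards reaching rarely visited state-action pairs.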

Published

2021-05-18

How to Cite

Zhang, C., Cai, Y., Huang, L., & Li, J. (2021). Exploration by Maximizing Rényi Entropy for Reward-Free RL Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10859-10867. https://doi.org/10.1609/aaai.v35i12.17297

Section

AAAI Technical Track on Machine Learning V