Relative Variational Intrinsic Control

Authors

  • Kate Baumli, DeepMind
  • David Warde-Farley, DeepMind
  • Steven Hansen, DeepMind
  • Volodymyr Mnih, DeepMind

DOI:

https://doi.org/10.1609/aaai.v35i8.16832

Keywords:

Unsupervised & Self-Supervised Learning, Reinforcement Learning, Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

In the absence of external rewards, agents can still learn useful behaviors by identifying and mastering a set of diverse skills within their environment. Existing skill learning methods use mutual information objectives to incentivize each skill to be diverse and distinguishable from the rest. However, if care is not taken to constrain the ways in which the skills are diverse, trivially diverse skill sets can arise. To ensure useful skill diversity, we propose a novel skill learning objective, Relative Variational Intrinsic Control (RVIC), which incentivizes learning skills that are distinguishable in how they change the agent's relationship to its environment. The resulting set of skills tiles the space of affordances available to the agent. We qualitatively analyze skill behaviors on multiple environments and show how RVIC skills are more useful than skills discovered by existing methods in hierarchical reinforcement learning.
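The mutual-information objective mentioned in the abstract can be illustrated with a minimal sketch. In variational skill-discovery methods of this family, a learned discriminator q(z | ·) tries to infer which skill z produced an observed outcome, and the agent receives an intrinsic reward proportional to log q(z | ·) − log p(z), a variational lower bound on the mutual information between skills and outcomes. The code below is an illustrative toy, not the authors' implementation: it uses an untrained linear softmax classifier as the discriminator, and a simple state difference s′ − s as a stand-in for the "relative" change in the agent's relationship to its environment that RVIC conditions on. All names (`intrinsic_reward`, `discriminator_logprobs`, the dimensions) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_SKILLS = 4   # size of the discrete skill set (illustrative choice)
STATE_DIM = 2    # toy state dimensionality

# Toy discriminator q(z | delta_s): a linear softmax classifier over skills.
# In practice this would be a trained network; random weights suffice to
# demonstrate the shape of the objective.
W = rng.normal(size=(NUM_SKILLS, STATE_DIM))

def discriminator_logprobs(delta_s):
    """Return log q(z | delta_s) for every skill z (log-softmax of logits)."""
    logits = W @ delta_s
    logits = logits - logits.max()          # numerical stability
    return logits - np.log(np.exp(logits).sum())

def intrinsic_reward(skill, s, s_next, num_skills=NUM_SKILLS):
    """Variational lower bound on I(z; delta_s) for one transition:
    log q(z | delta_s) - log p(z), with p(z) uniform over skills.
    Here delta_s = s' - s stands in for the relative change RVIC uses."""
    log_q = discriminator_logprobs(s_next - s)[skill]
    return log_q + np.log(num_skills)       # - log p(z) = + log K for uniform p

# Example: reward received by skill 0 for a particular state change.
s = np.zeros(STATE_DIM)
s_next = np.array([1.0, 0.0])
r = intrinsic_reward(0, s, s_next)
```

Because log q(z | ·) ≤ 0, the reward is bounded above by log K for K skills; skills are rewarded exactly to the extent that the changes they induce are distinguishable from those of other skills, which is the trivial-diversity failure mode the abstract describes when the conditioning quantity is not constrained.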

Published

2021-05-18

How to Cite

Baumli, K., Warde-Farley, D., Hansen, S., & Mnih, V. (2021). Relative Variational Intrinsic Control. Proceedings of the AAAI Conference on Artificial Intelligence, 35(8), 6732-6740. https://doi.org/10.1609/aaai.v35i8.16832

Section

AAAI Technical Track on Machine Learning I