Skill Disentanglement in Reproducing Kernel Hilbert Space

Authors

  • Vedant Dave Montanuniversität Leoben
  • Elmar Rueckert Montanuniversität Leoben

DOI:

https://doi.org/10.1609/aaai.v39i15.33774

Abstract

Unsupervised Skill Discovery aims at learning diverse skills without any extrinsic rewards and leveraging them as priors for learning a variety of downstream tasks. Existing approaches to unsupervised reinforcement learning typically discover skills through empowerment-driven techniques or by maximizing entropy to encourage exploration. However, this mutual information objective often results in either static skills that discourage exploration or maximal state coverage at the expense of skill discriminability. Instead of focusing only on maximizing bounds on f-divergence, we combine it with Integral Probability Metrics to maximize the distance between distributions, promoting behavioural diversity and enforcing disentanglement. Our method, Hilbert Unsupervised Skill Discovery (HUSD), provides an additional objective that seeks exploration and separability of state-skill pairs by maximizing the Maximum Mean Discrepancy between the joint distribution of skills and states and the product of their marginals in a Reproducing Kernel Hilbert Space. Our results on the Unsupervised RL Benchmark show that HUSD outperforms previous exploration algorithms on state-based tasks.
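The core quantity in the abstract, the Maximum Mean Discrepancy between the joint state-skill distribution and the product of its marginals, can be illustrated with a small numerical sketch. This is not the authors' implementation; it is a minimal toy example assuming a Gaussian (RBF) kernel, a standard biased MMD² estimator, and synthetic data in which each discrete skill shifts the state distribution. Samples from the product of marginals are approximated by shuffling the skill labels.

```python
import numpy as np

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian kernel on all pairs of rows of a and b
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of MMD^2 between samples x ~ P and y ~ Q:
    # E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]
    return (rbf_kernel(x, x, sigma).mean()
            + rbf_kernel(y, y, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean())

rng = np.random.default_rng(0)
n, n_skills = 256, 4
skills = rng.integers(0, n_skills, size=n)            # toy discrete skills
states = rng.normal(size=(n, 4)) + skills[:, None]    # states depend on skill
onehot = np.eye(n_skills)[skills]

# Samples from the joint p(s, z) vs. an approximation of p(s)p(z)
joint = np.concatenate([states, onehot], axis=1)
marginals = np.concatenate([states, onehot[rng.permutation(n)]], axis=1)

# Positive when states and skills are statistically dependent
print(mmd2(joint, marginals, sigma=2.0) > 0.0)
```

When states and skills are dependent, the estimate is strictly positive; a skill-discovery objective of this kind would push the policy to increase it, making skills more distinguishable from the states they visit.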

Published

2025-04-11

How to Cite

Dave, V., & Rueckert, E. (2025). Skill Disentanglement in Reproducing Kernel Hilbert Space. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15), 16153–16162. https://doi.org/10.1609/aaai.v39i15.33774

Section

AAAI Technical Track on Machine Learning I