Deceptive Reinforcement Learning in Model-Free Domains

Authors

  • Alan Lewis, University of Melbourne
  • Tim Miller, University of Melbourne

DOI

https://doi.org/10.1609/icaps.v33i1.27240

Keywords

Reinforcement Learning

Abstract

This paper investigates deceptive reinforcement learning for privacy preservation in model-free and continuous action space domains. In reinforcement learning, the reward function defines the agent's objective. In adversarial scenarios, an agent may need to both maximise rewards and keep its reward function private from observers. Recent research presented the ambiguity model (AM), which selects actions that are ambiguous over a set of possible reward functions, via pre-trained Q-functions. Despite promising results in model-based domains, our investigation shows that AM is ineffective in model-free domains due to misdirected state space exploration. It is also inefficient to train and inapplicable in continuous action spaces. We propose the deceptive exploration ambiguity model (DEAM), which learns using the deceptive policy during training, leading to targeted exploration of the state space. DEAM is also applicable in continuous action spaces. We evaluate DEAM in discrete and continuous action space path planning environments. DEAM achieves similar performance to an optimal model-based version of AM and outperforms a model-free version of AM in terms of path cost, deceptiveness and training efficiency. These results extend to the continuous domain.
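The abstract describes the ambiguity model as selecting actions that are ambiguous over a set of candidate reward functions, using pre-trained Q-functions. The sketch below is only an illustrative toy of that idea, not the paper's actual algorithm: it scores each action by how uniform its softmax selection probability is across the candidate Q-functions (higher entropy meaning an observer learns less about which reward function is the real one). The function name `ambiguous_action`, the entropy-based score, and the softmax temperature `beta` are all assumptions introduced here for illustration.

```python
import numpy as np

def ambiguous_action(q_tables, state, beta=1.0):
    """Toy ambiguity-style action selection (illustrative only).

    q_tables: list of Q arrays, one per candidate reward function,
              each shaped (n_states, n_actions).
    state:    integer state index.

    Returns the action whose softmax selection probability is most
    uniform across the candidate reward functions, i.e. the action
    that reveals least about the agent's true reward function.
    """
    # Q-values at this state: shape (n_rewards, n_actions)
    qs = np.stack([q[state] for q in q_tables])
    # Per-reward-function softmax policy over actions
    probs = np.exp(beta * qs)
    probs /= probs.sum(axis=1, keepdims=True)
    # For each action, normalise across reward hypotheses and take
    # the entropy: higher entropy = more ambiguous action
    p = probs / probs.sum(axis=0, keepdims=True)
    entropy = -(p * np.log(p + 1e-12)).sum(axis=0)
    return int(np.argmax(entropy))
```

For example, with two candidate reward functions that each favour a different action, this score prefers a third action on which both Q-functions agree, since it is equally consistent with either objective. The paper's contribution, DEAM, additionally uses the deceptive policy during training itself, which this static sketch does not capture.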

Published

2023-07-01

How to Cite

Lewis, A., & Miller, T. (2023). Deceptive Reinforcement Learning in Model-Free Domains. Proceedings of the International Conference on Automated Planning and Scheduling, 33(1), 587-595. https://doi.org/10.1609/icaps.v33i1.27240