Representation-Based Robustness in Goal-Conditioned Reinforcement Learning

Authors

  • Xiangyu Yin, University of Liverpool
  • Sihao Wu, University of Liverpool
  • Jiaxu Liu, University of Liverpool
  • Meng Fang, University of Liverpool
  • Xingyu Zhao, University of Warwick
  • Xiaowei Huang, University of Liverpool
  • Wenjie Ruan, University of Liverpool

DOI:

https://doi.org/10.1609/aaai.v38i19.30176

Keywords:

General

Abstract

While Goal-Conditioned Reinforcement Learning (GCRL) has gained attention, its algorithmic robustness against adversarial perturbations remains unexplored. Attacks and robust-representation training methods designed for traditional RL become less effective when applied to GCRL. To address this challenge, we first propose the Semi-Contrastive Representation attack, a novel approach inspired by the adversarial contrastive attack. Unlike existing attacks in RL, it requires information only from the policy function and can be seamlessly implemented during deployment. Then, to mitigate the vulnerability of existing GCRL algorithms, we introduce Adversarial Representation Tactics, which combines Semi-Contrastive Adversarial Augmentation with a Sensitivity-Aware Regularizer to improve the adversarial robustness of the underlying RL agent against various types of perturbations. Extensive experiments validate the superior performance of our attack and defence methods across multiple state-of-the-art GCRL algorithms. Our code is available at https://github.com/TrustAI/ReRoGCRL.
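The paper's Semi-Contrastive Representation attack is not specified on this page, so the following is only a minimal sketch of the general idea it gestures at: crafting a bounded observation perturbation that pushes the policy's internal representation away from its clean value, using nothing but the policy's feature map. All names here (`W`, `rep_distance`, the linear feature map, the single sign-step) are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical stand-in for a policy network's feature extractor:
# a fixed linear map f(s) = W @ s. A real GCRL agent would use its
# learned representation instead.
rng = np.random.default_rng(0)
d_state, d_rep = 8, 4
W = rng.standard_normal((d_rep, d_state))
s = rng.standard_normal(d_state)   # clean observation
eps = 0.1                          # l-infinity perturbation budget

def rep_distance(delta):
    """Squared distance between perturbed and clean representations."""
    diff = W @ (s + delta) - W @ s
    return float(diff @ diff)

# Random small start: the gradient of the squared distance is zero
# exactly at delta = 0, so a tiny random offset breaks the tie.
delta0 = 1e-3 * eps * rng.standard_normal(d_state)

# Analytic gradient of ||W @ delta||^2 w.r.t. delta is 2 W^T W delta.
grad = 2.0 * W.T @ (W @ delta0)

# One FGSM-style sign step, clipped to the budget: the perturbed
# observation s + delta_adv stays within an eps-ball of s while its
# representation is driven away from the clean one.
delta_adv = np.clip(eps * np.sign(grad), -eps, eps)

print(rep_distance(delta0), rep_distance(delta_adv))
```

Because the perturbation only queries the representation map, a sketch like this needs no access to rewards, value functions, or environment dynamics, which is consistent with the abstract's claim that the attack uses the policy function alone and can run at deployment time.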

Published

2024-03-24

How to Cite

Yin, X., Wu, S., Liu, J., Fang, M., Zhao, X., Huang, X., & Ruan, W. (2024). Representation-Based Robustness in Goal-Conditioned Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(19), 21761-21769. https://doi.org/10.1609/aaai.v38i19.30176

Section

AAAI Technical Track on Safe, Robust and Responsible AI Track