Self-Supervised Attention-Aware Reinforcement Learning

Authors

  • Haiping Wu, McGill University, Mila
  • Khimya Khetarpal, McGill University, Mila
  • Doina Precup, McGill University, Mila, Google DeepMind

Keywords:

Reinforcement Learning

Abstract

Visual saliency has emerged as a major visualization tool for interpreting deep reinforcement learning (RL) agents. However, most existing research uses it as an analysis tool rather than as an inductive bias for policy learning. In this work, we use visual attention as an inductive bias for RL agents. We propose a novel self-supervised attention learning approach which can (1) learn to select regions of interest without explicit annotations, and (2) act as a plug-in module for existing deep RL methods to improve learning performance. We empirically show that self-supervised attention-aware deep RL methods outperform the baselines in both convergence rate and final performance. Furthermore, the proposed self-supervised attention is not tied to a specific policy, nor restricted to a specific scene. We posit that the proposed approach is a general self-supervised attention module for multi-task learning and transfer learning, and we empirically validate its generalization ability. Finally, we show that our method learns meaningful object keypoints, highlighting improvements both qualitatively and quantitatively.
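The abstract does not include implementation details, but the idea of a plug-in attention module can be sketched in a minimal form: compute a spatial softmax mask over convolutional feature maps and modulate the features with it before they reach the policy network. The sketch below is an assumption-based illustration in NumPy (the function names, the linear scoring of channels, and the residual `1 + mask` modulation are all hypothetical choices, not the authors' actual method):

```python
import numpy as np

def spatial_softmax(logits):
    """Softmax over all spatial positions of a (H, W) score map."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

def attention_mask(features, w):
    """Hypothetical attention: score each spatial location by a linear
    combination of channels, then normalize with a spatial softmax.

    features: (C, H, W) feature maps; w: (C,) channel weights.
    Returns a (H, W) mask that sums to 1.
    """
    logits = np.tensordot(w, features, axes=([0], [0]))  # (H, W)
    return spatial_softmax(logits)

def attended_features(features, mask):
    """Residual modulation: keep the original features and amplify
    the attended regions, so the mask cannot zero out all signal."""
    return features * (1.0 + mask[None, :, :])
```

In a full agent, `w` would be learned (here via the paper's self-supervised objective), and the attended features would feed both the policy and the auxiliary attention loss; the same mask can then be visualized as the kind of saliency/keypoint map the abstract refers to.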

Published

2021-05-18

How to Cite

Wu, H., Khetarpal, K., & Precup, D. (2021). Self-Supervised Attention-Aware Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 10311-10319. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17235

Section

AAAI Technical Track on Machine Learning V