Towards Sample Efficient Agents through Algorithmic Alignment (Student Abstract)

Authors

  • Mingxuan Li Brown University
  • Michael L. Littman Brown University

Keywords

Sample Efficiency, Reinforcement Learning, Algorithmic Alignment, Graph Network, Value Iteration

Abstract

In this work, we propose and explore Deep Graph Value Network (DeepGV) as a promising method to reduce the sample complexity of deep reinforcement-learning agents via a message-passing mechanism. The main idea is that the agent should be guided by structured, non-neural-network algorithms like dynamic programming. According to recent advances in algorithmic alignment, neural networks with structured computation procedures can be trained efficiently. We demonstrate the potential of graph neural networks to support sample-efficient learning by showing that Deep Graph Value Network outperforms unstructured baselines by a large margin in solving Markov Decision Processes (MDPs). We believe this opens up a new avenue for structured agent design. See https://github.com/drmeerkat/Deep-Graph-Value-Network for the code.
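The abstract's core idea is that value iteration can be viewed as message passing on the MDP's transition graph, which is the structure a graph network can align with. The following is a minimal sketch of that correspondence on a hypothetical toy chain MDP (the environment, sizes, and function names are illustrative assumptions, not the setup from the paper):

```python
import numpy as np

# Hypothetical toy 4-state chain MDP for illustration only.
# Action 0 moves left, action 1 moves right; a reward of 1 is given
# for stepping right from state 2 into the rightmost state.
n_states, n_actions, gamma = 4, 2, 0.9
P = np.zeros((n_states, n_actions, n_states))  # transition probabilities
R = np.zeros((n_states, n_actions))            # immediate rewards
for s in range(n_states):
    P[s, 0, max(s - 1, 0)] = 1.0               # action 0: move left
    P[s, 1, min(s + 1, n_states - 1)] = 1.0    # action 1: move right
R[n_states - 2, 1] = 1.0

def value_iteration_as_message_passing(P, R, gamma, iters=100):
    """Value iteration phrased as message passing on the MDP graph:
    each state aggregates the messages gamma * P(s'|s,a) * V(s')
    arriving from its successor states, then reduces over actions
    with a max -- the aggregation/update pattern a graph network
    with aligned structure would learn to imitate."""
    V = np.zeros(P.shape[0])
    for _ in range(iters):
        messages = R + gamma * P @ V   # one message per (state, action) edge
        V = messages.max(axis=1)       # aggregate with max over actions
    return V

V = value_iteration_as_message_passing(P, R, gamma)
```

In DeepGV, the hand-coded aggregation above would be replaced by a learned graph-network message function, so the network's computation graph mirrors the dynamic-programming update it is meant to approximate.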

Published

2021-05-18

How to Cite

Li, M., & Littman, M. L. (2021). Towards Sample Efficient Agents through Algorithmic Alignment (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 35(18), 15827-15828. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17910

Section

AAAI Student Abstract and Poster Program