Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents

Authors

  • Samuel Wintermute, University of Michigan, Ann Arbor

DOI:

https://doi.org/10.1609/aaai.v24i1.7570

Keywords:

reinforcement learning, imagery, abstraction, cognitive architecture, Soar, state-action aggregation, state aggregation, spatial reasoning

Abstract

In this paper, we consider the problem of reinforcement learning in spatial tasks. These tasks have many states that can be aggregated together to improve learning efficiency. In an agent, this aggregation can take the form of selecting appropriate perceptual processes to arrive at a qualitative abstraction of the underlying continuous state. However, for arbitrary problems, an agent is unlikely to have the perceptual processes necessary to discriminate all relevant states in terms of such an abstraction.

To help compensate for this, reinforcement learning can be integrated with an imagery system, where simple models of physical processes are applied within a low-level perceptual representation to predict the state resulting from an action. Rather than abstracting the current state, abstraction can be applied to the predicted next state. Formally, it is shown that this integration broadens the class of perceptual abstraction methods that can be used while preserving the underlying problem. Empirically, it is shown that this approach can be used in complex domains, and can be beneficial even when formal requirements are not met.
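The core idea above — applying the perceptual abstraction to an imagined next state rather than to the current state — can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the `imagine` forward model, the `abstract` binning function, and the one-dimensional toy dynamics are all hypothetical stand-ins.

```python
import random
from collections import defaultdict

def imagine(state, action):
    # Hypothetical imagery step: a simple forward model applied in the
    # low-level (continuous) representation. Toy 1-D dynamics here.
    return state + action

def abstract(state):
    # Hypothetical perceptual abstraction: a coarse qualitative bin
    # of the continuous state (here, just its sign).
    return "neg" if state < 0 else "pos"

class ImageryQAgent:
    """Q-learner whose values are indexed by the abstraction of the
    imagined next state, so state-action pairs predicted to reach the
    same qualitative outcome are aggregated together."""

    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # abstract imagined-outcome -> value
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def key(self, state, action):
        # Abstraction of the *predicted* next state, not the current one.
        return abstract(imagine(state, action))

    def choose(self, state):
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[self.key(state, a)])

    def update(self, state, action, reward, next_state):
        best_next = max(self.q[self.key(next_state, a)] for a in self.actions)
        k = self.key(state, action)
        self.q[k] += self.alpha * (reward + self.gamma * best_next - self.q[k])
```

Because values attach to imagined outcomes, two distinct continuous states whose predicted successors fall into the same qualitative bin share a single table entry, which is the state-action aggregation the abstract describes.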

Published

2010-07-05

How to Cite

Wintermute, S. (2010). Using Imagery to Simplify Perceptual Abstraction in Reinforcement Learning Agents. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 1567-1573. https://doi.org/10.1609/aaai.v24i1.7570