Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring

Authors

  • Merlijn Krale Radboud University, Nijmegen
  • Thiago D. Simão Radboud University, Nijmegen
  • Nils Jansen Radboud University, Nijmegen

DOI:

https://doi.org/10.1609/icaps.v33i1.27197

Keywords:

Learning for planning and scheduling, Partially observable and unobservable domains, Uncertainty and stochasticity in planning and scheduling

Abstract

We study Markov decision processes (MDPs) in which agents control when and how they gather information, as formalized by action-contingent noiselessly observable MDPs (ACNO-MDPs). In these models, actions have two components: a control action that influences how the environment changes and a measurement action that affects the agent's observation. To solve ACNO-MDPs, we introduce the act-then-measure (ATM) heuristic, which assumes that we can ignore future state uncertainty when choosing control actions. To decide whether or not to measure, we introduce the concept of measuring value. We show how following this heuristic may lead to shorter policy computation times and prove a bound on the performance loss it incurs. We develop a reinforcement learning algorithm based on the ATM heuristic, using a Dyna-Q variant adapted for partially observable domains, and showcase its superior performance compared to prior methods on a number of partially observable environments.
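The two-step decision described in the abstract — choose a control action while ignoring future state uncertainty, then measure only when the estimated measuring value exceeds the measurement cost — can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: the Q-table `Q`, the belief vector, the cost parameter, and the particular estimate of measuring value (expected value of knowing the state minus the value of acting blind on the belief) are all simplifying assumptions.

```python
import numpy as np

def act_then_measure(Q, belief, cost):
    """Hypothetical sketch of an ATM-style decision rule.

    Q      : (n_states, n_actions) learned state-action values.
    belief : probability distribution over states.
    cost   : cost of taking the measurement action.
    """
    # 1) Act: pick the control action greedily against the current
    #    belief, ignoring future state uncertainty (the ATM heuristic).
    q_under_belief = belief @ Q               # expected Q-value per action
    control = int(np.argmax(q_under_belief))

    # 2) Measure: estimate a "measuring value" -- how much knowing the
    #    true state would improve the achievable value -- and measure
    #    only when that gain exceeds the measurement cost.
    value_if_known = belief @ Q.max(axis=1)   # E_s[max_a Q(s, a)]
    value_if_blind = q_under_belief.max()     # max_a E_s[Q(s, a)]
    measuring_value = value_if_known - value_if_blind
    return control, bool(measuring_value > cost)
```

With a uniform belief over two states whose optimal actions differ, the measuring value is large and the rule measures; with a certain belief it is zero and the rule skips the (costly) measurement.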

Published

2023-07-01

How to Cite

Krale, M., Simão, T. D., & Jansen, N. (2023). Act-Then-Measure: Reinforcement Learning for Partially Observable Environments with Active Measuring. Proceedings of the International Conference on Automated Planning and Scheduling, 33(1), 212-220. https://doi.org/10.1609/icaps.v33i1.27197