Learning Uncertainty-Aware Temporally-Extended Actions
DOI:
https://doi.org/10.1609/aaai.v38i12.29241
Keywords:
ML: Reinforcement Learning, ML: Deep Learning Algorithms
Abstract
In reinforcement learning, temporal abstraction in the action space, exemplified by action repetition, is a technique to facilitate policy learning through extended actions. However, a primary limitation in previous studies of action repetition is its potential to degrade performance, particularly when sub-optimal actions are repeated. This issue often negates the advantages of action repetition. To address this, we propose a novel algorithm named Uncertainty-aware Temporal Extension (UTE). UTE employs ensemble methods to accurately measure uncertainty during action extension. This feature allows policies to strategically choose between emphasizing exploration or adopting an uncertainty-averse approach, tailored to their specific needs. We demonstrate the effectiveness of UTE through experiments in Gridworld and Atari 2600 environments. Our findings show that UTE outperforms existing action repetition algorithms, effectively mitigating their inherent limitations and significantly enhancing policy learning efficiency.
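The abstract's core idea, using ensemble disagreement as an uncertainty signal when deciding how long to extend an action, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function `choose_extension`, the ensemble layout, and the uncertainty weight `lam` are assumptions for exposition only.

```python
import numpy as np

def choose_extension(ensemble_q: np.ndarray, lam: float) -> int:
    """Pick a repetition length from ensemble Q-value estimates.

    ensemble_q: shape (n_heads, n_lengths); entry [i, j] is head i's
        value estimate for repeating the chosen action for length j.
    lam: uncertainty weight. lam > 0 favors exploration (optimism in
        the face of uncertainty); lam < 0 is uncertainty-averse.
    """
    mean = ensemble_q.mean(axis=0)   # expected return per extension length
    std = ensemble_q.std(axis=0)     # ensemble disagreement = uncertainty
    return int(np.argmax(mean + lam * std))
```

With two heads that agree on a short extension but disagree on a longer one, a positive `lam` steers the policy toward the uncertain longer extension, while a negative `lam` steers it toward the safe shorter one, matching the two modes the abstract describes.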
Published
2024-03-24
How to Cite
Lee, J., Park, S. J., Tang, Y., & Oh, M.-H. (2024). Learning Uncertainty-Aware Temporally-Extended Actions. Proceedings of the AAAI Conference on Artificial Intelligence, 38(12), 13391-13399. https://doi.org/10.1609/aaai.v38i12.29241
Section
AAAI Technical Track on Machine Learning III