Context-Sensitive Abstractions for Reinforcement Learning with Parameterized Actions

Authors

  • Rashmeet Kaur Nayyar, Arizona State University
  • Naman Shah, Arizona State University and Brown University
  • Siddharth Srivastava, Arizona State University

DOI:

https://doi.org/10.1609/aaai.v40i29.39635

Abstract

Real-world sequential decision-making often involves parameterized action spaces that require both discrete action choices and decisions about the continuous parameters governing how each action is executed. Existing approaches exhibit severe limitations in this setting: planning methods demand hand-crafted action models; standard reinforcement learning (RL) algorithms are designed for either discrete or continuous actions, but not both; and the few RL methods that do handle parameterized actions typically rely on domain-specific engineering and fail to exploit the latent structure of these spaces. This paper extends the scope of RL algorithms to long-horizon, sparse-reward settings with parameterized actions by enabling agents to autonomously learn both state and action abstractions online. We introduce algorithms that progressively refine these abstractions during learning, adding fine-grained detail in the critical regions of the state–action space where greater resolution improves performance. Across several continuous-state, parameterized-action domains, our abstraction-driven approach enables TD(λ) to achieve markedly higher sample efficiency than state-of-the-art baselines.
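To make the notion of a parameterized action concrete, here is a minimal illustrative sketch (not the paper's implementation): each action pairs a discrete choice with a continuous parameter vector whose arity depends on that choice. All names (`ParameterizedAction`, `is_valid`, `param_dims`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ParameterizedAction:
    """A discrete action type plus the continuous parameters governing its execution."""
    action_id: int              # discrete choice, e.g. 0 = "kick", 1 = "turn"
    params: tuple               # continuous parameters, e.g. (direction, force)

def is_valid(a, param_dims):
    """Check that the parameter vector has the arity expected for the chosen action."""
    return len(a.params) == param_dims.get(a.action_id, -1)

# Hypothetical space: action 0 takes 2 continuous parameters, action 1 takes 1.
param_dims = {0: 2, 1: 1}
a = ParameterizedAction(action_id=0, params=(0.7, -1.2))
print(is_valid(a, param_dims))  # True
```

The key structural point exploited by the paper's abstractions is that the continuous parameter space differs per discrete action, so a flat discretization or a single continuous policy head fits it poorly.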

Published

2026-03-14

How to Cite

Nayyar, R. K., Shah, N., & Srivastava, S. (2026). Context-Sensitive Abstractions for Reinforcement Learning with Parameterized Actions. Proceedings of the AAAI Conference on Artificial Intelligence, 40(29), 24522-24531. https://doi.org/10.1609/aaai.v40i29.39635

Section

AAAI Technical Track on Machine Learning VI