Active Reinforcement Learning Strategies for Offline Policy Improvement

Authors

  • Ambedkar Dukkipati, Indian Institute of Science
  • Ranga Shaarad Ayyagari, Indian Institute of Science
  • Bodhisattwa Dasgupta, Indian Institute of Science
  • Parag Dutta, Indian Institute of Science
  • Prabhas Reddy Onteru, Indian Institute of Science

DOI:

https://doi.org/10.1609/aaai.v39i16.33803

Abstract

Learning agents that excel at sequential decision-making tasks must continuously resolve the trade-off between exploration and exploitation for optimal learning. However, such online interactions with the environment may be prohibitively expensive and may be subject to constraints, such as a limited budget for agent-environment interactions and restricted exploration in certain regions of the state space. Examples include selecting candidates for medical trials and training agents in complex navigation environments. This problem necessitates the study of active reinforcement learning strategies that collect minimal additional experience trajectories by reusing existing offline data previously collected by some unknown behavior policy. In this work, we propose an active reinforcement learning method capable of collecting trajectories that augment existing offline data. With extensive experimentation, we demonstrate that our proposed method reduces additional online interaction with the environment by up to 75% over competitive baselines across various continuous control environments, including the Gym-MuJoCo locomotion environments as well as Maze2d, AntMaze, CARLA, and IsaacSimGo1. To the best of our knowledge, this is the first work that addresses the active learning problem in the context of sequential decision-making and reinforcement learning.

Published

2025-04-11

How to Cite

Dukkipati, A., Ayyagari, R. S., Dasgupta, B., Dutta, P., & Onteru, P. R. (2025). Active Reinforcement Learning Strategies for Offline Policy Improvement. Proceedings of the AAAI Conference on Artificial Intelligence, 39(16), 16418–16425. https://doi.org/10.1609/aaai.v39i16.33803

Section

AAAI Technical Track on Machine Learning II