Efficient Reinforcement Learning for Real-Time Hardware-Based Energy System Experiments

Authors

  • Alexander Stevenson, National Renewable Energy Laboratory and Florida International University
  • Mayank Panwar, National Renewable Energy Laboratory
  • Rob Hovsapian, National Renewable Energy Laboratory
  • Arif Sarwat, Florida International University

DOI:

https://doi.org/10.1609/aaaiss.v2i1.27663

Keywords:

Reinforcement Learning, Digital Real Time Simulation, Deep Q-learning, Surrogate Modeling

Abstract

In the context of urgent climate challenges and the pressing need for rapid technology development, Reinforcement Learning (RL) stands as a compelling data-driven method for controlling real-world physical systems. However, RL implementations often entail time-consuming and computationally intensive data collection and training, making them impractical for real-time applications where no offline (non-real-time) model is available. To address these limitations, real-time emulation techniques have emerged as valuable tools for lab-scale rapid prototyping of intricate energy systems. While emulated systems offer a bridge between simulation and reality, they too face constraints that hinder comprehensive characterization, testing, and development. In this research, we construct a surrogate model using limited data from the simulated system, enabling an efficient and effective training process for a Double Deep Q-Network (DDQN) agent intended for future deployment. We illustrate the method through a hydropower application, demonstrating its practical impact on climate-related technology development.
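The abstract outlines a surrogate-assisted DDQN pipeline: a cheap data-driven model stands in for the real-time emulated plant so the agent can be trained off-hardware. Below is a minimal sketch of that idea, assuming a generic PyTorch setup. The SurrogateEnv class, toy_dynamics function, network sizes, and all hyperparameters are illustrative placeholders, not the authors' implementation.

```python
# Sketch: train a Double DQN agent against a surrogate of the plant dynamics.
# Everything here (dimensions, architecture, hyperparameters) is illustrative.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 4, 3          # placeholder state/action sizes
GAMMA, BATCH, LR = 0.99, 64, 1e-3    # placeholder hyperparameters


class SurrogateEnv:
    """Stand-in dynamics fitted offline on limited simulator/emulator data."""

    def __init__(self, dynamics_model):
        self.f = dynamics_model      # maps (state, action) -> (next_state, reward)
        self.state = np.zeros(STATE_DIM, dtype=np.float32)

    def reset(self):
        self.state = np.random.uniform(-1, 1, STATE_DIM).astype(np.float32)
        return self.state

    def step(self, action):
        self.state, reward = self.f(self.state, action)
        return self.state, reward


def make_qnet():
    return nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                         nn.Linear(64, N_ACTIONS))


online, target = make_qnet(), make_qnet()
target.load_state_dict(online.state_dict())
opt = torch.optim.Adam(online.parameters(), lr=LR)
buffer = deque(maxlen=50_000)


def ddqn_update():
    s, a, r, s2 = map(np.array, zip(*random.sample(buffer, BATCH)))
    s, s2 = torch.as_tensor(s), torch.as_tensor(s2)
    a = torch.as_tensor(a, dtype=torch.long)
    r = torch.as_tensor(r, dtype=torch.float32)
    # Double DQN: the online net selects the next action, the target net
    # evaluates it, which reduces the overestimation bias of vanilla DQN.
    with torch.no_grad():
        next_a = online(s2).argmax(dim=1, keepdim=True)
        y = r + GAMMA * target(s2).gather(1, next_a).squeeze(1)
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    loss = nn.functional.mse_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()


if __name__ == "__main__":
    # Toy surrogate: damped drift plus an action offset; reward penalizes
    # distance from the origin. A real surrogate would be fitted to data.
    def toy_dynamics(state, action):
        nxt = (0.9 * state + 0.1 * (action - 1)).astype(np.float32)
        return nxt, float(-np.linalg.norm(nxt))

    env = SurrogateEnv(toy_dynamics)
    s = env.reset()
    for step in range(2_000):
        # Epsilon-greedy action selection against the online network.
        if random.random() < 0.1:
            a = random.randrange(N_ACTIONS)
        else:
            a = int(online(torch.as_tensor(s)).argmax())
        s2, r = env.step(a)
        buffer.append((s, a, r, s2))
        s = s2
        if len(buffer) >= BATCH:
            ddqn_update()
        if step % 200 == 0:
            target.load_state_dict(online.state_dict())  # periodic target sync
```

Because every interaction happens with the surrogate rather than the real-time hardware, the agent can gather experience far faster than the plant's wall-clock dynamics would allow; the trained policy would then be validated on the emulated system before any deployment.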

Published

2024-01-22

How to Cite

Stevenson, A., Panwar, M., Hovsapian, R., & Sarwat, A. (2024). Efficient Reinforcement Learning for Real-Time Hardware-Based Energy System Experiments. Proceedings of the AAAI Symposium Series, 2(1), 153-158. https://doi.org/10.1609/aaaiss.v2i1.27663

Section

Artificial Intelligence and Climate: The Role of AI in a Climate-Smart Sustainable Future