Low Emission Building Control with Zero-Shot Reinforcement Learning

Authors

  • Scott Jeen, University of Cambridge; Alan Turing Institute
  • Alessandro Abate, University of Oxford; Alan Turing Institute
  • Jonathan M. Cullen, University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v37i12.26668

Keywords:

General

Abstract

Heating and cooling systems in buildings account for 31% of global energy use, much of which is regulated by Rule Based Controllers (RBCs) that neither maximise energy efficiency nor minimise emissions by interacting optimally with the grid. Control via Reinforcement Learning (RL) has been shown to significantly improve building energy efficiency, but existing solutions require access to building-specific simulators or data that cannot be expected for every building in the world. In response, we show it is possible to obtain emission-reducing policies without such knowledge a priori, a paradigm we call zero-shot building control. We combine ideas from system identification and model-based RL to create PEARL (Probabilistic Emission-Abating Reinforcement Learning) and show that a short period of active exploration is all that is required to build a performant model. In experiments across three varied building energy simulations, we show PEARL outperforms an existing RBC in one case, and popular RL baselines in all cases, reducing building emissions by as much as 31% whilst maintaining thermal comfort. Our source code is available online at: https://enjeeneer.io/projects/pearl/.
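To make the abstract's description of the approach concrete, the sketch below illustrates the general pattern it names: a short active-exploration phase that fits a probabilistic dynamics model to a previously unseen building, followed by model-predictive planning against a cost that penalises both emissions and thermal discomfort. This is not the authors' PEARL implementation; the toy building dynamics, the bootstrap linear ensemble, the cost weights, and the random-shooting planner are all illustrative assumptions standing in for the components described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy building dynamics (stand-in for a real building simulator) --------
# State: indoor temperature (°C); action: heating/cooling power in [-1, 1].
def building_step(temp, action, outdoor=10.0, alpha=0.1, beta=2.0):
    return temp + alpha * (outdoor - temp) + beta * action

# --- Probabilistic dynamics model: bootstrap ensemble of linear regressors -
class EnsembleModel:
    def __init__(self, n_members=5):
        self.members = [None] * n_members

    def fit(self, states, actions, next_states):
        X = np.column_stack([states, actions, np.ones_like(states)])
        for i in range(len(self.members)):
            idx = rng.integers(0, len(X), len(X))       # bootstrap resample
            w, *_ = np.linalg.lstsq(X[idx], next_states[idx], rcond=None)
            self.members[i] = w

    def predict(self, state, action):
        x = np.array([state, action, 1.0])
        preds = np.array([x @ w for w in self.members])
        return preds.mean(), preds.std()                # mean, epistemic spread

# --- Phase 1: short active exploration to gather system-ID data ------------
model = EnsembleModel()
states, actions, next_states = [], [], []
temp = 18.0
for t in range(200):
    # Choose the candidate action the ensemble disagrees on most
    # (random while the model is untrained), then apply it to the building.
    candidates = rng.uniform(-1, 1, 10)
    if model.members[0] is None:
        a = candidates[0]
    else:
        a = max(candidates, key=lambda c: model.predict(temp, c)[1])
    nxt = building_step(temp, a)
    states.append(temp); actions.append(a); next_states.append(nxt)
    temp = nxt
    if (t + 1) % 50 == 0:
        model.fit(np.array(states), np.array(actions), np.array(next_states))

# --- Phase 2: MPC planning with an emission- and comfort-aware cost --------
def cost(action, temp, carbon_intensity, comfort=(19.0, 24.0)):
    emissions = carbon_intensity * abs(action)          # energy use ∝ |action|
    lo, hi = comfort
    discomfort = max(0.0, lo - temp) + max(0.0, temp - hi)
    return emissions + 10.0 * discomfort

def plan(temp, carbon_intensity, horizon=5, n_samples=200):
    """Random-shooting MPC over the learned model; return the first action."""
    best_seq, best_cost = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1, 1, horizon)
        t, total = temp, 0.0
        for a in seq:
            t, _ = model.predict(t, a)
            total += cost(a, t, carbon_intensity)
        if total < best_cost:
            best_seq, best_cost = seq, total
    return best_seq[0]

temp = 18.0
for hour in range(24):
    carbon = 0.2 + 0.1 * np.sin(hour / 24 * 2 * np.pi)  # toy grid carbon signal
    a = plan(temp, carbon)
    temp = building_step(temp, a)
    print(f"hour {hour:2d}  temp {temp:5.2f} °C  action {a:+.2f}")
```

The key design choice this sketch mirrors is that the controller never sees a building-specific simulator or historical dataset in advance: everything the planner relies on is learned from the brief exploration phase, and planning trades off a grid carbon-intensity signal against thermal comfort bounds.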

Published

2023-06-26

How to Cite

Jeen, S., Abate, A., & Cullen, J. M. (2023). Low Emission Building Control with Zero-Shot Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 37(12), 14259-14267. https://doi.org/10.1609/aaai.v37i12.26668

Section

AAAI Special Track on AI for Social Impact