Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning

Authors

  • Tong Mu Stanford University
  • Georgios Theocharous Adobe Research
  • David Arbour Adobe Research
  • Emma Brunskill Stanford University

DOI:

https://doi.org/10.1609/aaai.v36i7.20753

Keywords:

Machine Learning (ML)

Abstract

Online reinforcement learning (RL) algorithms are often difficult to deploy in complex human-facing applications because they can learn slowly and have poor early performance. To address this, we introduce a practical algorithm for incorporating human insight to speed learning. Our algorithm, Constraint Sampling Reinforcement Learning (CSRL), incorporates prior domain knowledge as constraints (restrictions) on the RL policy. It takes in multiple potential policy constraints to maintain robustness to misspecification of individual constraints while leveraging helpful constraints to learn quickly. Given a base RL learning algorithm (e.g., UCRL, DQN, Rainbow), we propose an upper-confidence-with-elimination scheme that leverages the relationship between the constraints and their observed performance to adaptively switch among them. We instantiate our algorithm with DQN-type algorithms and UCRL as base algorithms, and evaluate it in four environments, including three simulators based on real data: recommendations, educational activity sequencing, and HIV treatment sequencing. In all cases, CSRL learns a good policy faster than baselines.
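As a rough illustration of the selection scheme the abstract describes, the sketch below implements a generic upper-confidence rule with elimination over a set of candidate policy constraints. This is a minimal sketch, not the authors' implementation: the class name ConstraintSelector, the Hoeffding-style confidence bonus, and the assumption that episode returns are scaled to [0, 1] are all illustrative choices.

```python
import math

class ConstraintSelector:
    """Minimal sketch of upper-confidence selection with elimination
    over candidate policy constraints (hypothetical illustration)."""

    def __init__(self, num_constraints, delta=0.05):
        self.active = set(range(num_constraints))  # constraints still in play
        self.counts = [0] * num_constraints        # episodes run per constraint
        self.sums = [0.0] * num_constraints        # cumulative episode returns
        self.delta = delta                         # confidence parameter

    def _mean(self, i):
        return self.sums[i] / max(self.counts[i], 1)

    def _bonus(self, i):
        # Hoeffding-style radius; assumes returns lie in [0, 1]
        n = max(self.counts[i], 1)
        return math.sqrt(math.log(2 * len(self.counts) / self.delta) / (2 * n))

    def select(self):
        # Optimism: run the constraint with the highest upper confidence bound
        return max(self.active, key=lambda i: self._mean(i) + self._bonus(i))

    def update(self, i, episode_return):
        self.counts[i] += 1
        self.sums[i] += episode_return
        # Eliminate constraints whose upper bound falls below the best lower bound
        best_lcb = max(self._mean(j) - self._bonus(j) for j in self.active)
        self.active = {j for j in self.active
                       if self._mean(j) + self._bonus(j) >= best_lcb}
```

In use, each episode would pick a constraint via select(), run the constrained base learner (e.g., a constrained DQN) for that episode, and feed the observed return back through update(); misspecified constraints are gradually eliminated while helpful ones keep being exploited.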

Published

2022-06-28

How to Cite

Mu, T., Theocharous, G., Arbour, D., & Brunskill, E. (2022). Constraint Sampling Reinforcement Learning: Incorporating Expertise for Faster Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 36(7), 7841-7849. https://doi.org/10.1609/aaai.v36i7.20753

Issue

Vol. 36 No. 7 (2022)

Section

AAAI Technical Track on Machine Learning II