Towards Robust Off-Policy Learning for Runtime Uncertainty

Authors

  • Da Xu, Walmart Labs
  • Yuting Ye, UC Berkeley
  • Chuanwei Ruan, Instacart
  • Bo Yang, LinkedIn

DOI:

https://doi.org/10.1609/aaai.v36i9.21249

Keywords:

Reasoning Under Uncertainty (RU), Planning, Routing, and Scheduling (PRS), Domain(s) of Application (APP)

Abstract

Off-policy learning plays a pivotal role in optimizing and evaluating policies prior to online deployment. During real-time serving, however, we observe a variety of interventions and constraints that cause inconsistency between the online and offline settings, which we summarize and term runtime uncertainty. Such uncertainty cannot be learned from the logged data because of its abnormal and rare nature. To assert a certain level of robustness, we perturb the off-policy estimators along an adversarial direction in view of the runtime uncertainty. This allows the resulting estimators to be robust not only to observed but also to unexpected runtime uncertainties. Leveraging this idea, we bring runtime-uncertainty robustness to three major off-policy learning methods: the inverse propensity score method, the reward-model method, and the doubly robust method. We theoretically justify the robustness of our methods to runtime uncertainty, and demonstrate their effectiveness using both simulations and real-world online experiments.
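For readers unfamiliar with the estimators named in the abstract, the two standard building blocks can be written as follows (textbook definitions, not reproduced from the paper); here \mu denotes the logging policy, \pi the target policy, r_i the logged reward, and \hat{r} a learned reward model with \hat{r}(x_i, \pi) = \sum_a \pi(a \mid x_i)\,\hat{r}(x_i, a). The perturbed form shown last is only an illustrative worst-case sketch over a neighborhood U of the empirical data distribution \hat{P}, not the authors' exact construction.

\hat{V}_{\mathrm{IPS}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)}\, r_i

\hat{V}_{\mathrm{DR}}(\pi) = \frac{1}{n} \sum_{i=1}^{n} \Big[ \hat{r}(x_i, \pi) + \frac{\pi(a_i \mid x_i)}{\mu(a_i \mid x_i)} \big( r_i - \hat{r}(x_i, a_i) \big) \Big]

\hat{V}_{\mathrm{robust}}(\pi) = \min_{Q \in U(\hat{P})} \mathbb{E}_{Q} \Big[ \frac{\pi(a \mid x)}{\mu(a \mid x)}\, r \Big] \quad \text{(illustrative adversarial form)}

The robust variant reflects the abstract's idea of evaluating the estimator under the least favorable perturbation of the logged data within a prescribed uncertainty set, so that the resulting value is not overly sensitive to runtime interventions absent from the logs.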

Published

2022-06-28

How to Cite

Xu, D., Ye, Y., Ruan, C., & Yang, B. (2022). Towards Robust Off-Policy Learning for Runtime Uncertainty. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 10101-10109. https://doi.org/10.1609/aaai.v36i9.21249

Section

AAAI Technical Track on Reasoning under Uncertainty