Distributionally Robust Counterfactual Risk Minimization

Authors

  • Louis Faury, Criteo AI Lab
  • Ugo Tanielian, Criteo AI Lab
  • Elvis Dohmatob, Criteo AI Lab
  • Elena Smirnova, Criteo AI Lab
  • Flavian Vasile, Criteo AI Lab

DOI:

https://doi.org/10.1609/aaai.v34i04.5797

Abstract

This manuscript introduces the idea of using Distributionally Robust Optimization (DRO) for the Counterfactual Risk Minimization (CRM) problem. Tapping into a rich existing literature, we show that DRO is a principled tool for counterfactual decision making. We also show that well-established solutions to the CRM problem, such as sample variance penalization schemes, are special instances of a more general DRO problem. In this unifying framework, a variety of distributionally robust counterfactual risk estimators can be constructed using various probability distances and divergences as uncertainty measures. We propose the use of the Kullback-Leibler divergence as an alternative way to model uncertainty in CRM and derive a new robust counterfactual objective. In our experiments, we show that this approach outperforms the state of the art on four benchmark datasets, validating the relevance of using other uncertainty measures in practical applications.
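For intuition only, here is a minimal sketch of how a KL-robust counterfactual risk of the kind described in the abstract could be evaluated from logged bandit feedback. It is not the authors' implementation: the data, function names, and radius `epsilon` are all hypothetical, and the only ingredients assumed are the standard inverse propensity scoring (IPS) estimator and the standard convex dual of a KL-constrained worst-case expectation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def ips_losses(costs, logging_props, target_props):
    """Per-sample importance-weighted losses (standard IPS estimator terms)."""
    return costs * target_props / logging_props

def kl_robust_risk(losses, epsilon):
    """Worst-case risk over the KL ball {Q : KL(Q || P_n) <= epsilon} around the
    empirical distribution P_n, via the standard dual representation:
        sup_Q E_Q[loss] = min_{alpha > 0} alpha * log E_{P_n}[exp(loss / alpha)]
                                          + alpha * epsilon
    """
    def dual(alpha):
        z = losses / alpha
        m = z.max()  # log-sum-exp shift for numerical stability
        return alpha * (m + np.log(np.mean(np.exp(z - m)))) + alpha * epsilon

    res = minimize_scalar(dual, bounds=(1e-6, 1e6), method="bounded")
    return res.fun

# Toy usage with synthetic logged data (purely illustrative).
rng = np.random.default_rng(0)
n = 1000
costs = rng.uniform(0.0, 1.0, n)
logging_props = rng.uniform(0.2, 0.8, n)
target_props = rng.uniform(0.2, 0.8, n)

losses = ips_losses(costs, logging_props, target_props)
print("plain IPS risk:          ", losses.mean())
print("KL-robust risk (eps=.05):", kl_robust_risk(losses, 0.05))
```

A second-order expansion of this dual yields a mean-plus-standard-deviation penalty on the empirical risk, which illustrates the sense in which sample variance penalization can be viewed as a special instance of the more general DRO formulation.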

Published

2020-04-03

How to Cite

Faury, L., Tanielian, U., Dohmatob, E., Smirnova, E., & Vasile, F. (2020). Distributionally Robust Counterfactual Risk Minimization. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 3850-3857. https://doi.org/10.1609/aaai.v34i04.5797

Section

AAAI Technical Track: Machine Learning