Entropy Regularization for Population Estimation

Authors

  • Ben Chugg, Carnegie Mellon University
  • Peter Henderson, Stanford University
  • Jacob Goldin, University of Chicago
  • Daniel E. Ho, Stanford University

DOI:

https://doi.org/10.1609/aaai.v37i10.26438

Keywords:

RU: Sequential Decision Making, ML: Active Learning, ML: Online Learning & Bandits

Abstract

Entropy regularization is known to improve exploration in sequential decision-making problems. We show that this same mechanism can also lead to nearly unbiased and lower-variance estimates of the mean reward in the optimize-and-estimate structured bandit setting. Mean reward estimation (i.e., population estimation) tasks have recently been shown to be essential for public policy settings where legal constraints often require precise estimates of population metrics. We show that leveraging entropy and KL divergence can yield a better trade-off between reward and estimator variance than existing baselines, all while remaining nearly unbiased. These properties of entropy regularization illustrate an exciting potential for bringing together the optimal exploration and estimation literature.
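The mechanism the abstract describes, an entropy-regularized sampling policy that keeps every unit's selection probability strictly positive, so reward can be pursued while the population mean remains estimable with little bias, can be illustrated with a small sketch. The snippet below is not the paper's algorithm: it uses a generic softmax (maximum-entropy) sampling policy with an inverse-propensity estimate of the mean reward, and every name and parameter (entropy_regularized_probs, tau, the simulated rewards and budget) is an illustrative assumption.

```python
import numpy as np

def entropy_regularized_probs(scores, tau):
    """Softmax sampling probabilities.

    Maximizing E_p[score] + tau * H(p) over distributions p gives
    p_i proportional to exp(score_i / tau). Larger tau -> closer to
    uniform (more exploration); tau -> 0 recovers greedy selection.
    """
    z = (scores - scores.max()) / tau      # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

rng = np.random.default_rng(0)

N = 1000                                            # population size
true_rewards = rng.beta(2.0, 5.0, size=N)           # hypothetical per-unit rewards
scores = true_rewards + rng.normal(0, 0.1, size=N)  # noisy model predictions

tau = 0.05                                   # entropy-regularization strength
p = entropy_regularized_probs(scores, tau)   # every p_i > 0 by construction

B = 200                                      # sampling budget (draws with replacement)
draws = rng.choice(N, size=B, p=p)
observed = true_rewards[draws]               # rewards revealed for the sampled units

# Inverse-propensity (importance-weighted) estimate of the population mean:
# E_p[r_i / (N * p_i)] = (1/N) * sum_i r_i, so it is unbiased whenever p_i > 0.
est_mean = np.mean(observed / (N * p[draws]))

print(f"true population mean : {true_rewards.mean():.4f}")
print(f"estimated mean       : {est_mean:.4f}")
print(f"avg reward collected : {observed.mean():.4f}")
```

In this toy setup the temperature tau governs the reward-variance trade-off highlighted in the abstract: a small tau concentrates draws on high-scoring units (more reward collected, but heavier importance weights and higher estimator variance), while a large tau approaches uniform sampling (the classic low-variance survey estimate, at the cost of reward).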

Published

2023-06-26

How to Cite

Chugg, B., Henderson, P., Goldin, J., & Ho, D. E. (2023). Entropy Regularization for Population Estimation. Proceedings of the AAAI Conference on Artificial Intelligence, 37(10), 12198-12204. https://doi.org/10.1609/aaai.v37i10.26438

Issue

Vol. 37 No. 10 (2023)

Section

AAAI Technical Track on Reasoning Under Uncertainty