Design Amortization for Bayesian Optimal Experimental Design
DOI:
https://doi.org/10.1609/aaai.v37i7.25992

Keywords:
ML: Probabilistic Methods, ML: Applications, ML: Bayesian Learning, ML: Deep Generative Models & Autoencoders

Abstract
Bayesian optimal experimental design is a sub-field of statistics focused on developing methods to make efficient use of experimental resources. Any potential design is evaluated in terms of a utility function, such as the theoretically well-justified expected information gain (EIG); unfortunately, however, the EIG is intractable to evaluate under most circumstances. In this work we build on successful variational approaches, which optimize a parameterized variational model with respect to bounds on the EIG. Past work focused on learning a new variational model from scratch for each new design considered. Here we present a novel neural architecture that allows experimenters to optimize a single variational model that can estimate the EIG for potentially infinitely many designs. To further improve computational efficiency, we also propose to train the variational model on a significantly cheaper-to-evaluate lower bound, and show empirically that the resulting model provides an excellent guide for more accurate, but more expensive-to-evaluate, bounds on the EIG. We demonstrate the effectiveness of our technique on generalized linear models, a class of statistical models widely used in the analysis of controlled experiments. Experiments show that our method greatly improves accuracy over existing approximation strategies, and achieves these results with far better sample efficiency.
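To make concrete why the EIG is expensive to evaluate, the sketch below shows the standard nested Monte Carlo baseline against which variational estimators are usually compared. This is not the paper's amortized method; it is an illustrative estimate on a hypothetical one-parameter linear-Gaussian model (the model, function name, and parameters are all assumptions for the example). Each outer sample of the expectation requires its own inner Monte Carlo estimate of the marginal likelihood, which is what drives the cost up.

```python
import numpy as np

def nested_mc_eig(design, n_outer=2000, n_inner=2000, noise_sd=1.0, seed=0):
    """Nested Monte Carlo EIG estimate for a toy model (assumption, not the
    paper's setup):  theta ~ N(0, 1),  y | theta, d ~ N(theta * d, noise_sd^2).

    EIG(d) = E_{p(theta) p(y|theta,d)}[ log p(y|theta,d) - log p(y|d) ].
    """
    rng = np.random.default_rng(seed)
    # Outer samples: simulate (theta_n, y_n) from the joint p(theta) p(y | theta, d).
    theta = rng.standard_normal(n_outer)
    y = theta * design + noise_sd * rng.standard_normal(n_outer)
    log_const = -np.log(noise_sd * np.sqrt(2.0 * np.pi))
    # log p(y_n | theta_n, d) -- Gaussian log-likelihood at the generating theta.
    log_lik = log_const - 0.5 * ((y - theta * design) / noise_sd) ** 2
    # Inner samples: fresh prior draws to estimate log p(y_n | d) for every y_n.
    theta_in = rng.standard_normal(n_inner)
    diffs = y[:, None] - design * theta_in[None, :]        # shape (n_outer, n_inner)
    log_lik_in = log_const - 0.5 * (diffs / noise_sd) ** 2
    # Numerically stable log-mean-exp over the inner axis.
    mx = log_lik_in.max(axis=1)
    log_marg = mx + np.log(np.mean(np.exp(log_lik_in - mx[:, None]), axis=1))
    return float(np.mean(log_lik - log_marg))
```

For this linear-Gaussian toy model the EIG has the closed form 0.5 * log(1 + d^2 / noise_sd^2), which makes the estimator easy to sanity-check; note the O(n_outer * n_inner) cost per design, which the variational approaches in the paper aim to avoid.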
Published
2023-06-26
How to Cite
Kennamer, N., Walton, S., & Ihler, A. (2023). Design Amortization for Bayesian Optimal Experimental Design. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 8220-8227. https://doi.org/10.1609/aaai.v37i7.25992
Section
AAAI Technical Track on Machine Learning II