On Estimating the Gradient of the Expected Information Gain in Bayesian Experimental Design
DOI: https://doi.org/10.1609/aaai.v38i18.30012
Keywords: RU: Decision/Utility Theory, RU: Probabilistic Inference, RU: Stochastic Optimization
Abstract
Bayesian Experimental Design (BED), which aims to find the optimal experimental conditions for Bayesian inference, is usually posed as the optimization of the expected information gain (EIG). Gradient information is often needed for efficient EIG optimization, and as a result the ability to estimate the gradient of the EIG is essential for BED problems. The primary goal of this work is to develop methods for estimating the gradient of the EIG, which, combined with stochastic gradient descent algorithms, result in efficient optimization of the EIG. Specifically, we first introduce a posterior expected representation of the EIG gradient with respect to the design variables. Based on this, we propose two methods for estimating the EIG gradient: UEEG-MCMC, which leverages posterior samples generated through Markov Chain Monte Carlo (MCMC) to estimate the EIG gradient, and BEEG-AP, which focuses on achieving high simulation efficiency by repeatedly using parameter samples. Theoretical analysis and numerical studies illustrate that UEEG-MCMC is robust with respect to the actual EIG value, while BEEG-AP is more efficient when the EIG value to be optimized is small. Moreover, both methods show superior performance compared to several popular benchmarks in our numerical experiments.
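For reference, the quantity being optimized is standard (this definition is background, not quoted from the page): the EIG at a design d is the expected Kullback-Leibler divergence from the prior to the posterior,

EIG(d) = \int\!\!\int p(y \mid \theta, d)\, p(\theta)\, \log \frac{p(y \mid \theta, d)}{p(y \mid d)}\, dy\, d\theta.

The sketch below estimates the EIG itself (not its gradient) by plain nested Monte Carlo on a toy linear-Gaussian model. It is a common baseline estimator rather than the paper's UEEG-MCMC or BEEG-AP methods, and the model, the noise level SIGMA, and the sample sizes are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): y = d * theta + noise, theta ~ N(0, 1).
SIGMA = 0.5  # observation noise standard deviation (illustrative choice)

def sample_prior(n):
    return rng.standard_normal(n)

def sample_data(theta, design):
    return design * theta + SIGMA * rng.standard_normal(np.shape(theta))

def log_likelihood(y, theta, design):
    # log p(y | theta, d) for a Gaussian observation model
    return -0.5 * ((y - design * theta) / SIGMA) ** 2 - np.log(SIGMA * np.sqrt(2 * np.pi))

def nested_mc_eig(design, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo EIG estimate: E[log p(y|theta,d) - log p(y|d)]."""
    theta_out = sample_prior(n_outer)
    y = sample_data(theta_out, design)
    ll_out = log_likelihood(y, theta_out, design)
    # Inner Monte Carlo average approximates the evidence p(y | d)
    theta_in = sample_prior(n_inner)
    ll_in = log_likelihood(y[:, None], theta_in[None, :], design)
    log_evidence = np.logaddexp.reduce(ll_in, axis=1) - np.log(n_inner)
    return np.mean(ll_out - log_evidence)

if __name__ == "__main__":
    for d in (0.5, 1.0, 2.0):
        print(f"design={d:.1f}  EIG ~ {nested_mc_eig(d):.3f}")

In this conjugate toy model the estimate can be checked against the closed form 0.5 * log(1 + d^2 / SIGMA^2); gradient-based estimators such as those proposed in the paper aim to avoid the cost of re-running such nested estimates at every candidate design.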
Published
2024-03-24
How to Cite
Ao, Z., & Li, J. (2024). On Estimating the Gradient of the Expected Information Gain in Bayesian Experimental Design. Proceedings of the AAAI Conference on Artificial Intelligence, 38(18), 20311-20319. https://doi.org/10.1609/aaai.v38i18.30012
Section
AAAI Technical Track on Reasoning under Uncertainty