Improving Bayesian Neural Networks by Adversarial Sampling
Keywords: Reasoning Under Uncertainty (RU)
Abstract
Bayesian neural networks (BNNs) have drawn extensive interest due to their unique probabilistic representation framework. However, Bayesian neural networks have seen limited public deployment because of their relatively poor performance in real-world applications. In this paper, we argue that the randomness of sampling in Bayesian neural networks causes errors in model-parameter updates during training and yields some sampled models that perform poorly at test time. To address this, we propose training Bayesian neural networks with an Adversarial Distribution as a theoretical solution. To avoid the difficulty of computing the Adversarial Distribution analytically, we further present the Adversarial Sampling method as a practical approximation. We conduct extensive experiments with multiple network structures on different datasets, e.g., CIFAR-10 and CIFAR-100. Experimental results validate the correctness of the theoretical analysis and the effectiveness of Adversarial Sampling in improving model performance. Additionally, models trained with Adversarial Sampling retain their ability to model uncertainties and perform better when predictions are retained according to those uncertainties, which further verifies the generality of the Adversarial Sampling approach.
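The abstract does not spell out the algorithm, but the core idea (draw weight samples from the variational posterior and train against adversarially chosen ones rather than arbitrary ones) can be illustrated with a toy sketch. The sketch below is an assumption-laden reading, not the paper's actual method: it uses a factorized Gaussian posterior over the weights of a linear model, draws `k` candidate weight samples, and selects the one with the highest training loss as the "adversarial" sample to update against. All names (`adversarial_sample`, `k`, the linear model itself) are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy variational posterior: a factorized Gaussian over the 3 weights
# of a linear model. mu/rho are the variational parameters that a BNN
# would actually learn; here they are fixed for illustration.
mu = np.zeros(3)                      # variational means
rho = np.full(3, -1.0)                # pre-softplus scale parameters
sigma = np.log1p(np.exp(rho))         # softplus -> positive std devs

# Synthetic regression data for the toy training loss.
X = rng.normal(size=(8, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=8)

def loss(w):
    """Mean squared error of one sampled weight vector."""
    return float(np.mean((X @ w - y) ** 2))

def adversarial_sample(k=16):
    """Draw k candidate weight samples from the posterior and return
    the one with the highest training loss -- one hedged reading of
    'Adversarial Sampling': update against the worst sampled model so
    that poorly performing samples are explicitly improved."""
    candidates = mu + sigma * rng.normal(size=(k, 3))
    losses = [loss(w) for w in candidates]
    return candidates[int(np.argmax(losses))]

# A gradient step on loss(w_adv) w.r.t. mu/rho (omitted here) would
# then target the adversarial sample instead of a random one.
w_adv = adversarial_sample()
```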
How to Cite
Zhang, J., Hua, Y., Song, T., Wang, H., Xue, Z., Ma, R., & Guan, H. (2022). Improving Bayesian Neural Networks by Adversarial Sampling. Proceedings of the AAAI Conference on Artificial Intelligence, 36(9), 10110-10117. https://doi.org/10.1609/aaai.v36i9.21250
AAAI Technical Track on Reasoning under Uncertainty