Tractable Sharpness-Aware Learning of Probabilistic Circuits
DOI:
https://doi.org/10.1609/aaai.v40i30.39771
Abstract
Probabilistic Circuits (PCs) are a class of generative models that allow exact and tractable inference for a wide range of queries. While recent developments have enabled the learning of deep and expressive PCs, this increased capacity can often lead to overfitting, especially when data is limited. We analyze PC overfitting from a log-likelihood-landscape perspective and show that it is often caused by convergence to sharp optima that generalize poorly. Inspired by sharpness-aware minimization in neural networks, we propose a Hessian-based regularizer for training PCs. As a key contribution, we show that the trace of the Hessian of the log-likelihood--a sharpness proxy that is typically intractable in deep neural networks--can be computed efficiently for PCs. Minimizing this Hessian trace induces a gradient-norm-based regularizer that yields simple closed-form parameter updates for EM and integrates seamlessly with gradient-based learning methods. Experiments on synthetic and real-world datasets demonstrate that our method consistently guides PCs toward flatter minima, improving generalization performance.
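To make the abstract's central claim concrete, the following is a brief sketch (ours, not reproduced from the paper) of why the Hessian trace is tractable for PCs. It assumes the parameters \(\theta\) are the sum-node weights of a smooth, decomposable circuit, in which case the circuit output \(p(x;\theta)\) is multilinear in \(\theta\), so \(\partial^2 p / \partial \theta_i^2 = 0\) for every parameter and each diagonal Hessian entry of the log-likelihood collapses to a squared first derivative:

\[
\frac{\partial^2 \log p(x;\theta)}{\partial \theta_i^2}
= \frac{\partial}{\partial \theta_i}\!\left(\frac{1}{p}\frac{\partial p}{\partial \theta_i}\right)
= \underbrace{\frac{1}{p}\frac{\partial^2 p}{\partial \theta_i^2}}_{=\,0}
\;-\; \left(\frac{1}{p}\frac{\partial p}{\partial \theta_i}\right)^{2}
= -\left(\frac{\partial \log p(x;\theta)}{\partial \theta_i}\right)^{2},
\]

\[
\text{hence}\qquad
\operatorname{Tr}\!\big(\nabla_\theta^2 \log p(x;\theta)\big)
= -\big\|\nabla_\theta \log p(x;\theta)\big\|_2^2 .
\]

Since the gradient of a PC's log-likelihood is computable in a single backward pass, the trace is tractable, and penalizing sharpness amounts to the gradient-norm regularizer the abstract mentions, e.g. maximizing \(\log p(x;\theta) - \lambda \|\nabla_\theta \log p(x;\theta)\|_2^2\) for some \(\lambda > 0\) (the symbol \(\lambda\) and this objective form are illustrative assumptions; the exact objective and the closed-form EM updates are given in the paper).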
Published
2026-03-14
How to Cite
Suresh, H., Sidheekh, S., P, V. S. M., Natarajan, S., & Chatapuram Krishnan, N. (2026). Tractable Sharpness-Aware Learning of Probabilistic Circuits. Proceedings of the AAAI Conference on Artificial Intelligence, 40(30), 25736–25744. https://doi.org/10.1609/aaai.v40i30.39771
Issue
Vol. 40 No. 30
Section
AAAI Technical Track on Machine Learning VII