On the Probabilistic Learnability of Compact Neural Network Preimage Bounds
DOI:
https://doi.org/10.1609/aaai.v40i42.40883
Abstract
Although recent provable methods have been developed to compute preimage bounds for neural networks, their scalability is fundamentally limited by the #P-hardness of the problem. In this work, we adopt a novel probabilistic perspective, aiming to deliver solutions with high-confidence guarantees and bounded error. To this end, we investigate the potential of bootstrap-based and randomized approaches that are capable of capturing complex patterns in high-dimensional spaces, including input regions where a given output property holds. In detail, we introduce Random Forest Property Verifier (RF-ProVe), a method that exploits an ensemble of randomized decision trees to generate candidate input regions satisfying a desired output property and refines them through active resampling. Our theoretical derivations offer formal statistical guarantees on region purity and global coverage, providing a practical, scalable solution for computing compact preimage approximations in cases where exact solvers fail to scale.
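To make the pipeline in the abstract concrete, the following is a minimal NumPy sketch of the general idea: sample the input domain, label points by whether the output property holds, grow axis-aligned candidate boxes with a single randomized-tree-style recursive split, then re-estimate each box's purity on fresh samples with a Hoeffding-style confidence bound. This is an illustrative toy, not the authors' RF-ProVe implementation; the stand-in network `net`, the 0.9 purity threshold, the tree depth, and the sample sizes are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a trained network: the output property
# is assumed to hold whenever net(x) > 0.
def net(x):
    return x[..., 0] - x[..., 1] ** 2

def holds(x):
    return net(x) > 0.0

def split_box(box, X, y, depth=0, max_depth=4, min_n=20):
    """Recursively split an axis-aligned box on its widest dimension,
    keeping leaves whose samples are (nearly) all positive."""
    lo, hi = box
    if depth == max_depth or len(y) < min_n:
        return [(box, y.mean())] if len(y) and y.mean() > 0.9 else []
    if y.all():                      # already pure: keep whole box
        return [(box, 1.0)]
    if not y.any():                  # no positives: discard box
        return []
    d = int(np.argmax(hi - lo))      # split the widest dimension
    mid = 0.5 * (lo[d] + hi[d])
    left = X[:, d] <= mid
    hi_left, lo_right = hi.copy(), lo.copy()
    hi_left[d] = mid
    lo_right[d] = mid
    return (split_box((lo, hi_left), X[left], y[left], depth + 1, max_depth, min_n)
          + split_box((lo_right, hi), X[~left], y[~left], depth + 1, max_depth, min_n))

# 1) Sample the input domain and label each point with the property.
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
X = rng.uniform(lo, hi, size=(4000, 2))
y = holds(X)

# 2) Extract candidate regions (leaves with high empirical purity).
regions = split_box((lo, hi), X, y)

# 3) Active resampling: re-estimate each region's purity with fresh
#    samples; certify a region only if its Hoeffding lower bound on
#    purity (confidence 1 - delta) clears the threshold.
delta, m = 0.05, 2000
eps = np.sqrt(np.log(1 / delta) / (2 * m))   # Hoeffding deviation term
certified = []
for (blo, bhi), _ in regions:
    Z = rng.uniform(blo, bhi, size=(m, 2))
    p_hat = holds(Z).mean()
    if p_hat - eps >= 0.9:
        certified.append((blo, bhi, p_hat))

print(f"{len(certified)} certified regions out of {len(regions)} candidates")
```

The union of certified boxes is an inner, probabilistically sound approximation of the preimage: each retained box satisfies the property on at least a 0.9 fraction of its volume with confidence 1 - delta. The paper's actual guarantees (ensemble of trees, coverage bounds) are stronger than this single-tree sketch.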
Published
2026-03-14
How to Cite
Marzari, L., Bicego, M., Cicalese, F., & Farinelli, A. (2026). On the Probabilistic Learnability of Compact Neural Network Preimage Bounds. Proceedings of the AAAI Conference on Artificial Intelligence, 40(42), 35707–35714. https://doi.org/10.1609/aaai.v40i42.40883
Section
AAAI Technical Track on Philosophy and Ethics of AI