Explaining Generalization Power of a DNN Using Interactive Concepts
DOI:
https://doi.org/10.1609/aaai.v38i15.29655
Keywords:
ML: Transparent, Interpretable, Explainable ML
Abstract
This paper explains the generalization power of a deep neural network (DNN) from the perspective of interactions. Although there is no universally accepted definition of the concepts encoded by a DNN, the sparsity of interactions in a DNN has been proven, i.e., the output score of a DNN can be well explained by a small number of interactions between input variables. In this way, to some extent, we can consider such interactions as interactive concepts encoded by the DNN. Therefore, in this paper, we derive an analytic explanation for the inconsistency of concepts of different complexities. This may shed new light on using the generalization power of concepts to explain the generalization power of the entire DNN. Moreover, we discover that a DNN with stronger generalization power usually learns simple concepts more quickly and encodes fewer complex concepts. We also discover the detouring dynamics of learning complex concepts, which explains both the high learning difficulty and the low generalization power of complex concepts. The code will be released when the paper is accepted.
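To give a concrete handle on the interaction metric the abstract alludes to, the sketch below computes interaction effects via the Harsanyi dividend, the definition commonly used in this line of work on interaction sparsity. The paper's exact formulation is not reproduced on this page, so treat this as an illustrative assumption: the names (harsanyi_interactions, v) are hypothetical, and the brute-force enumeration is feasible only for a handful of input variables.

```python
import itertools

def harsanyi_interactions(v, n):
    """Compute I(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T)
    for every subset S of the n input variables.

    v: callable mapping a frozenset of variable indices (the unmasked
       variables) to the model's scalar output on the masked input.
    Returns a dict {frozenset S: interaction effect I(S)}.
    """
    variables = list(range(n))
    interactions = {}
    # Enumerate all 2^n subsets S (tractable only for small n).
    for r in range(n + 1):
        for S in itertools.combinations(variables, r):
            S = frozenset(S)
            effect = 0.0
            # Inclusion-exclusion over all subsets T of S.
            for k in range(len(S) + 1):
                for T in itertools.combinations(sorted(S), k):
                    effect += (-1) ** (len(S) - k) * v(frozenset(T))
            interactions[S] = effect
    return interactions

# Toy check: a model that fires iff variables 0 AND 1 are both unmasked.
v_and = lambda T: float({0, 1} <= T)
I = harsanyi_interactions(v_and, n=3)
# Only S = {0, 1} carries a nonzero effect; all other subsets are 0.
assert abs(I[frozenset({0, 1})] - 1.0) < 1e-9
```

Under this definition, the full output decomposes exactly as v(N) = sum over S of I(S); the sparsity claim in the abstract then amounts to saying that only a small number of subsets S carry non-negligible effects I(S), and those subsets play the role of interactive concepts.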
Published
2024-03-24
How to Cite
Zhou, H., Zhang, H., Deng, H., Liu, D., Shen, W., Chan, S.-H., & Zhang, Q. (2024). Explaining Generalization Power of a DNN Using Interactive Concepts. Proceedings of the AAAI Conference on Artificial Intelligence, 38(15), 17105-17113. https://doi.org/10.1609/aaai.v38i15.29655
Issue
Vol. 38 No. 15 (2024)
Section
AAAI Technical Track on Machine Learning VI