Explanation Bottleneck Models

Authors

  • Shin'ya Yamaguchi, NTT / Kyoto University
  • Kosuke Nishida, NTT

DOI

https://doi.org/10.1609/aaai.v39i20.35495

Abstract

Recent concept-based interpretable models have succeeded in providing meaningful explanations through pre-defined concept sets. However, the dependency on pre-defined concepts restricts these models' applicability, because only a limited number of concepts is available for explanations. This paper proposes a novel interpretable deep neural network called explanation bottleneck models (XBMs). Leveraging pre-trained vision-language encoder-decoder models, XBMs generate a text explanation from the input without pre-defined concepts and then make the final task prediction based on the generated explanation. To achieve both target task performance and explanation quality, we train XBMs with the target task loss plus a regularization term that penalizes the explanation decoder via distillation from a frozen copy of the pre-trained decoder. Our experiments, including a comparison to state-of-the-art concept bottleneck models, confirm that XBMs provide accurate and fluent natural language explanations without pre-defined concept sets.
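To make the described pipeline concrete, the following is a minimal PyTorch sketch of the two-stage design and training objective summarized in the abstract: an explanation decoder produces text from visual features, a classifier predicts the label from that text, and a KL distillation term keeps the trainable decoder close to a frozen pre-trained copy. The module names (`XBMSketch`, `xbm_loss`), the soft-token relaxation used to keep the pipeline differentiable, and the weight `lam` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torch import nn


class XBMSketch(nn.Module):
    """Hypothetical sketch of an explanation bottleneck model.

    The component modules are stand-ins for a pre-trained
    vision-language encoder-decoder; they are assumptions, not the
    paper's actual architecture.
    """

    def __init__(self, vision_encoder, decoder, frozen_decoder, text_classifier):
        super().__init__()
        self.vision_encoder = vision_encoder    # pre-trained image encoder
        self.decoder = decoder                  # trainable explanation decoder
        self.frozen_decoder = frozen_decoder    # frozen pre-trained copy (teacher)
        for p in self.frozen_decoder.parameters():
            p.requires_grad_(False)
        self.text_classifier = text_classifier  # predicts the label from text

    def forward(self, image):
        feats = self.vision_encoder(image)
        token_logits = self.decoder(feats)      # (B, T, vocab)
        # Soft tokens keep the explanation-to-prediction path differentiable;
        # this relaxation is an assumption (any sampling scheme could be used).
        soft_tokens = F.softmax(token_logits, dim=-1)
        task_logits = self.text_classifier(soft_tokens)
        return task_logits, token_logits, feats


def xbm_loss(model, image, label, lam=1.0):
    """Target task loss plus distillation from the frozen decoder."""
    task_logits, token_logits, feats = model(image)
    task_loss = F.cross_entropy(task_logits, label)
    with torch.no_grad():
        teacher_logits = model.frozen_decoder(feats)
    # Distillation penalty: keep the trainable decoder's token distribution
    # close to the frozen pre-trained one, so explanations stay fluent.
    distill = F.kl_div(
        F.log_softmax(token_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="batchmean",
    )
    return task_loss + lam * distill


if __name__ == "__main__":
    # Toy stubs (linear layers) just to show the shapes flowing end to end.
    B, T, V, C, D = 2, 8, 100, 5, 16
    vision = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, D))
    dec = nn.Sequential(nn.Linear(D, T * V), nn.Unflatten(1, (T, V)))
    frozen = nn.Sequential(nn.Linear(D, T * V), nn.Unflatten(1, (T, V)))
    clf = nn.Sequential(nn.Flatten(), nn.Linear(T * V, C))
    model = XBMSketch(vision, dec, frozen, clf)
    loss = xbm_loss(model, torch.randn(B, 3, 32, 32), torch.randint(0, C, (B,)))
    loss.backward()
    print(float(loss))
```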

Published

2025-04-11

How to Cite

Yamaguchi, S., & Nishida, K. (2025). Explanation Bottleneck Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(20), 21886–21894. https://doi.org/10.1609/aaai.v39i20.35495

Issue

Vol. 39 No. 20 (2025): Proceedings of the AAAI Conference on Artificial Intelligence

Section

AAAI Technical Track on Machine Learning VI