Interactive Concept Bottleneck Models

Authors

  • Kushal Chauhan, Google Research India
  • Rishabh Tiwari, Google Research India
  • Jan Freyberg, Google Health India
  • Pradeep Shenoy, Google Research India
  • Krishnamurthy Dvijotham, Google Research India

DOI:

https://doi.org/10.1609/aaai.v37i5.25736

Keywords:

HAI: Human-Machine Teams, CV: Applications, CV: Interpretability and Transparency, HAI: Human-Computer Interaction, ML: Calibration & Uncertainty Quantification, ML: Transparent, Interpretable, Explainable ML, RU: Applications

Abstract

Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on those concept label predictions. We extend CBMs to interactive prediction settings where the model can query a human collaborator for the labels of some concepts. We develop an interaction policy that, at prediction time, chooses which concepts to request labels for so as to maximally improve the final prediction. We demonstrate that a simple policy combining concept prediction uncertainty and the influence of each concept on the final prediction achieves strong performance, outperforming static approaches as well as active feature acquisition methods proposed in the literature. We show that the interactive CBM achieves accuracy gains of 5-10% over competitive baselines with only 5 interactions on the Caltech-UCSD Birds, CheXpert, and OAI datasets.
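To make the described policy concrete, the sketch below illustrates one plausible uncertainty-times-influence acquisition loop; it is not the authors' reference implementation. It assumes binary concepts, uses entropy as the uncertainty term and the magnitude of a linear label head's weights as the influence term, and treats `concept_model`, `label_model`, and `oracle` as hypothetical placeholders.

```python
# Minimal sketch of an interactive-CBM query policy (assumptions noted inline).
import numpy as np

def select_concept_to_query(concept_probs, concept_importance, already_queried):
    """Score each concept by prediction uncertainty times its influence on the
    final label, and return the index of the highest-scoring unqueried concept."""
    # Binary-concept entropy as the uncertainty term (assumption).
    p = np.clip(concept_probs, 1e-6, 1 - 1e-6)
    uncertainty = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    scores = uncertainty * concept_importance   # combine uncertainty and influence
    scores[list(already_queried)] = -np.inf     # never re-query a concept
    return int(np.argmax(scores))

def interactive_predict(x, concept_model, label_model, oracle, budget=5):
    """Run up to `budget` rounds of concept queries, replacing predicted concept
    probabilities with human-provided labels before re-predicting the class."""
    probs = concept_model(x)                         # predicted concept probabilities
    importance = label_model.concept_influence()     # e.g. |weights| of a linear head (assumption)
    queried = set()
    for _ in range(budget):
        k = select_concept_to_query(probs, importance, queried)
        probs[k] = oracle(k)                         # human collaborator supplies the true concept label
        queried.add(k)
    return label_model.predict(probs)                # final prediction from corrected concepts
```

The key design choice this sketch reflects is that a concept is worth querying only when the model is both unsure about it and the downstream classifier is sensitive to it; either factor alone can waste the interaction budget.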

Published

2023-06-26

How to Cite

Chauhan, K., Tiwari, R., Freyberg, J., Shenoy, P., & Dvijotham, K. (2023). Interactive Concept Bottleneck Models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(5), 5948-5955. https://doi.org/10.1609/aaai.v37i5.25736

Section

AAAI Technical Track on Humans and AI