Making AI Policies Transparent to Humans through Demonstrations
DOI:
https://doi.org/10.1609/aaai.v38i21.30399
Keywords:
Explainable AI, Policy Summarization, Transparency, Human-agent Interaction
Abstract
Demonstrations are a powerful way of increasing the transparency of AI policies to humans. Though we can approximately model human learning from demonstrations as inverse reinforcement learning, human learning differs from algorithmic learning in key ways, e.g., humans are computationally limited and may struggle to grasp all of the nuances of a demonstration. Unlike related work that provides demonstrations which simply maximize information gain, I leverage concepts from the human education literature, such as the zone of proximal development and scaffolding, to show demonstrations that balance informativeness against difficulty of understanding, thereby maximizing human learning.
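The selection idea in the abstract can be illustrated with a minimal sketch: among candidate demonstrations, prefer the most informative one whose difficulty sits just above the learner's current skill level (their zone of proximal development). The `select_demonstration` function, the demo dictionaries, and the numeric scores below are illustrative assumptions, not the paper's actual algorithm.

```python
# Hypothetical sketch of ZPD-based demonstration selection.
# All names, scores, and thresholds here are illustrative assumptions.

def select_demonstration(demos, skill_level, zpd_width=0.2):
    """Pick the most informative demo whose difficulty falls just above
    the learner's current skill level (their zone of proximal development)."""
    in_zpd = [d for d in demos
              if skill_level < d["difficulty"] <= skill_level + zpd_width]
    if not in_zpd:  # nothing in the ZPD: fall back to the easiest demo
        return min(demos, key=lambda d: d["difficulty"])
    # Within the ZPD, maximize informativeness.
    return max(in_zpd, key=lambda d: d["info_gain"])

demos = [
    {"id": "A", "info_gain": 0.9, "difficulty": 0.8},  # informative but too hard
    {"id": "B", "info_gain": 0.6, "difficulty": 0.5},  # inside the ZPD
    {"id": "C", "info_gain": 0.3, "difficulty": 0.4},  # inside the ZPD, less informative
]
print(select_demonstration(demos, skill_level=0.35)["id"])  # "B"
```

Note how demo "A" is skipped despite its higher information gain: a demonstration the learner cannot yet follow teaches little, which is the trade-off the abstract describes.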
Published
2024-03-24
How to Cite
Lee, M. S. (2024). Making AI Policies Transparent to Humans through Demonstrations. Proceedings of the AAAI Conference on Artificial Intelligence, 38(21), 23397-23398. https://doi.org/10.1609/aaai.v38i21.30399
Section
AAAI Doctoral Consortium Track