Explainable Agency for Intelligent Autonomous Systems

Authors

  • Pat Langley, University of Auckland
  • Ben Meadows, University of Auckland
  • Mohan Sridharan, University of Auckland
  • Dongkyu Choi, University of Kansas

DOI

https://doi.org/10.1609/aaai.v31i2.19108

Abstract

As intelligent agents become more autonomous, sophisticated, and prevalent, it becomes increasingly important that humans interact with them effectively. Machine learning is now used regularly to acquire expertise, but common techniques produce opaque content whose behavior is difficult to interpret. Before they will be trusted by humans, autonomous agents must be able to explain their decisions and the reasoning that produced their choices. We will refer to this general ability as explainable agency. This capacity for explaining decisions is not an academic exercise. When a self-driving vehicle takes an unfamiliar turn, its passenger may desire to know its reasons. When a synthetic ally in a computer game blocks a player's path, the player may want to understand its purpose. When an autonomous military robot has abandoned a high-priority goal to pursue another one, its commander may request justification. As robots, vehicles, and synthetic characters become more self-reliant, people will require that they explain their behaviors on demand. The more impressive these agents' abilities, the more essential it is that we be able to understand them.

Published

2017-02-11

How to Cite

Langley, P., Meadows, B., Sridharan, M., & Choi, D. (2017). Explainable Agency for Intelligent Autonomous Systems. Proceedings of the AAAI Conference on Artificial Intelligence, 31(2), 4762-4763. https://doi.org/10.1609/aaai.v31i2.19108