Artificial Trust in Mutually Adaptive Human-Machine Teams

Authors

  • Carolina Centeio Jorge, Delft University of Technology; University of Michigan
  • Ewart J. de Visser, U.S. Air Force Academy
  • Myrthe L. Tielman, Delft University of Technology
  • Catholijn M. Jonker, Delft University of Technology; University of Leiden
  • Lionel P. Robert, University of Michigan

DOI:

https://doi.org/10.1609/aaaiss.v4i1.31766

Abstract

As machines' autonomy increases, so does their capacity to learn from and adapt to humans in collaborative scenarios. In particular, machines can use artificial trust (AT) to make decisions, such as task and role allocation/selection. However, the outcome of such decisions, and the way they are communicated, can affect the human's trust in the machine, which in turn affects how the human collaborates. With the goal of maintaining appropriate mutual trust between human and machine, we reflect on the requirements for equipping an artificial teammate with an AT-based decision-making model. Furthermore, we propose a user study to investigate the role of task-based willingness (e.g., a human's preferences regarding tasks) and its communication in AT-based decision-making.
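The paper itself does not include an implementation, but the idea of AT-based task allocation can be sketched concretely. The following minimal Python sketch is an illustrative assumption, not the authors' model: the artificial teammate keeps per-task beliefs about the human's competence and willingness, allocates each task to whichever party it expects to perform better, and updates its beliefs from observed outcomes. All names, the equal weighting, and the smoothing update rule are hypothetical choices.

from dataclasses import dataclass, field

# Hypothetical sketch of artificial-trust-based task allocation.
# The trust representation, weights, and update rule below are
# illustrative assumptions, not the model proposed in the paper.

@dataclass
class TrustBelief:
    competence: float = 0.5   # estimated human ability on this task, in [0, 1]
    willingness: float = 0.5  # estimated human willingness/preference, in [0, 1]

@dataclass
class ArtificialTeammate:
    beliefs: dict[str, TrustBelief] = field(default_factory=dict)
    own_competence: dict[str, float] = field(default_factory=dict)

    def allocate(self, task: str) -> str:
        """Assign the task to whoever is expected to perform it better."""
        b = self.beliefs.setdefault(task, TrustBelief())
        # Expected human performance combines competence and willingness;
        # the equal weighting is an arbitrary illustrative choice.
        human_score = 0.5 * b.competence + 0.5 * b.willingness
        machine_score = self.own_competence.get(task, 0.5)
        return "human" if human_score >= machine_score else "machine"

    def observe(self, task: str, success: bool, accepted: bool,
                lr: float = 0.2) -> None:
        """Update trust beliefs from an observed outcome (simple smoothing)."""
        b = self.beliefs.setdefault(task, TrustBelief())
        b.competence += lr * (float(success) - b.competence)
        b.willingness += lr * (float(accepted) - b.willingness)

teammate = ArtificialTeammate(own_competence={"navigate": 0.8, "triage": 0.4})
teammate.observe("triage", success=True, accepted=True)
print(teammate.allocate("triage"))  # "human": positive observations raise both beliefs above 0.4

In this toy setup, willingness enters the allocation decision directly, which mirrors the question the proposed user study asks: how communicating (or ignoring) a human's task preferences changes AT-based decisions and, in turn, the human's trust.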

Published

2024-11-08

How to Cite

Centeio Jorge, C., de Visser, E. J., Tielman, M. L., Jonker, C. M., & Robert, L. P. (2024). Artificial Trust in Mutually Adaptive Human-Machine Teams. Proceedings of the AAAI Symposium Series, 4(1), 18-23. https://doi.org/10.1609/aaaiss.v4i1.31766

Section

AI Trustworthiness and Risk Assessment for Challenging Contexts (ATRACC)