Training Humans for Robust Human-Agent Teaming: Knowing When to Engage with an AI Partner

Authors

  • Leon Lange, University of California, San Diego
  • Qiao Zhang, Georgia Institute of Technology
  • Christopher J. MacLellan, Georgia Institute of Technology
  • Ying Wu, University of California, San Diego

DOI:

https://doi.org/10.1609/aaaiss.v5i1.35563

Abstract

Learning to team with an AI counterpart can be challenging, particularly when the task itself is unfamiliar and must also be learned. This study compares the impacts of scaffolded versus self-paced training on human-AI agent teams negotiating a novel logistics and sustainment task. Guiding participants early on in how to leverage AI assistance (scaffolded practice) led to much more robust teaming than allowing them to learn at their own pace. Additionally, teams whose human counterpart received scaffolded practice tended to achieve higher scores than those who learned under self-direction. Post-hoc analysis also revealed that the timing of when human team members leveraged the agent was of particular importance, with the greatest impact of human-AI teaming observed in the most high-stakes periods of the game. Taken together, these findings demonstrate not only that some forms of training are more beneficial than others for human-AI agent teaming, but also that context-specific learning on the fly is important for effective team performance.

Published

2025-05-28

How to Cite

Lange, L., Zhang, Q., MacLellan, C. J., & Wu, Y. (2025). Training Humans for Robust Human-Agent Teaming: Knowing When to Engage with an AI Partner. Proceedings of the AAAI Symposium Series, 5(1), 83–86. https://doi.org/10.1609/aaaiss.v5i1.35563

Section

Current and Future Varieties of Human-AI Collaboration