Human-Robot Trust and Cooperation Through a Game Theoretic Framework

Authors

  • Erin Paeng, Harvey Mudd College
  • Jane Wu, Harvey Mudd College
  • James Boerkoel, Harvey Mudd College

DOI:

https://doi.org/10.1609/aaai.v30i1.9961

Keywords:

Trust, Human-Robot Interaction

Abstract

Trust and cooperation are fundamental to human interactions. How much we trust other people directly influences the decisions we make and our willingness to cooperate. It thus seems natural that trust would be equally important in successful human-robot interaction (HRI), since how much a human trusts a robot affects how they interact with it. We propose using a coin entrustment game, a variant of the prisoner's dilemma, to measure trust and cooperation as separate phenomena between human and robot agents. With this game, we test the following hypotheses: (1) humans will achieve and maintain higher levels of trust when interacting with what they believe to be a robot than with another human; and (2) humans will cooperate more readily with robots and will maintain a higher level of cooperation. This work contributes an experimental paradigm that uses the coin entrustment game to test these hypotheses. Our empirical analysis shows that humans tend to trust robots to a greater degree than other humans, while cooperating equally well with both.
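
For intuition, here is a minimal sketch of one round of a coin-entrustment-style game. The specific rules below are illustrative assumptions, not the paper's published parameters: the fixed 2x multiplier on entrusted coins, the all-or-nothing return decision, and the `Move`/`play_round` names are all invented for this sketch. The idea it demonstrates is the one the abstract describes: how many coins a player entrusts serves as the trust measure, while the decision to return them serves as the cooperation measure.

```python
# Sketch of one round of a coin-entrustment-style game (assumed rules,
# not the paper's exact parameters).
from dataclasses import dataclass

MULTIPLIER = 2  # assumed growth factor applied to coins while entrusted


@dataclass
class Move:
    entrusted: int   # coins sent to the partner (proxy for trust)
    cooperate: bool  # whether to return received coins (proxy for cooperation)


def play_round(a: Move, b: Move, bank_a: int, bank_b: int) -> tuple[int, int]:
    """Return the updated coin banks of players A and B after one round."""
    # Each player gives up whatever they chose to entrust.
    bank_a -= a.entrusted
    bank_b -= b.entrusted

    # Entrusted coins grow while in the partner's hands.
    grown_a = a.entrusted * MULTIPLIER  # A's coins, held by B
    grown_b = b.entrusted * MULTIPLIER  # B's coins, held by A

    # A cooperator returns the grown coins; a defector keeps them.
    if b.cooperate:
        bank_a += grown_a
    else:
        bank_b += grown_a
    if a.cooperate:
        bank_b += grown_b
    else:
        bank_a += grown_b
    return bank_a, bank_b


if __name__ == "__main__":
    # A entrusts 5 coins and cooperates; B entrusts 2 coins and defects.
    print(play_round(Move(5, True), Move(2, False), bank_a=10, bank_b=10))
    # -> (5, 22): B's defection converts A's trust into a one-sided gain.
```

Under these assumed rules, mutual cooperation grows both banks, while defection exploits the partner's trust, which is what lets the game measure trust (coins entrusted) and cooperation (coins returned) as separate quantities.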

Published

2016-03-05

How to Cite

Paeng, E., Wu, J., & Boerkoel, J. (2016). Human-Robot Trust and Cooperation Through a Game Theoretic Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.9961