Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust

Authors

  • Ruoxi Shang, University of Washington
  • Gary Hsieh, University of Washington
  • Chirag Shah, University of Washington

DOI:

https://doi.org/10.1609/aies.v7i1.31728

Abstract

Trust is not just a cognitive issue but also an emotional one, yet research in human-AI interaction has primarily focused on the cognitive route of trust development. Recent work has highlighted the importance of studying affective trust toward AI, especially in the context of emerging human-like, LLM-powered conversational agents. However, there is a lack of validated and generalizable measures for the two-dimensional construct of trust in AI agents. To address this gap, we developed and validated a 27-item set of semantic differential scales for affective and cognitive trust through a scenario-based survey study. We then further validated and applied the scale in an experimental study. Our empirical findings show how the emotional and cognitive aspects of trust interact with each other and collectively shape a person's overall trust in AI agents. Our study methodology and findings also provide insights into the capability of state-of-the-art LLMs to foster trust through different routes.

Published

2024-10-16

How to Cite

Shang, R., Hsieh, G., & Shah, C. (2024). Trusting Your AI Agent Emotionally and Cognitively: Development and Validation of a Semantic Differential Scale for AI Trust. Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society, 7(1), 1343-1356. https://doi.org/10.1609/aies.v7i1.31728