Citations and Trust in LLM Generated Responses

Authors

  • Yifan Ding, University of Notre Dame
  • Matthew Facciani, University of Notre Dame
  • Ellen Joyce, University of Notre Dame
  • Amrit Poudel, University of Notre Dame
  • Sanmitra Bhattacharya, Deloitte & Touche LLP
  • Balaji Veeramani, Deloitte & Touche LLP
  • Sal Aguinaga, Deloitte & Touche LLP
  • Tim Weninger, University of Notre Dame

DOI:

https://doi.org/10.1609/aaai.v39i22.34550

Abstract

Question answering systems are rapidly advancing, but their opaque nature may impact user trust. We explored trust through an anti-monitoring framework, in which trust is predicted to correlate with the presence of citations and inversely with checking citations. We tested this hypothesis in a live question-answering experiment that presented text responses generated by a commercial chatbot along with varying numbers of citations (zero, one, or five), both relevant and random, and recorded whether participants checked the citations and their self-reported trust in the generated responses. We found a significant increase in trust when citations were present, a result that held even when the citations were random; we also found a significant decrease in trust when participants checked the citations. These results highlight the importance of citations in enhancing trust in AI-generated content.

Published

2025-04-11

How to Cite

Ding, Y., Facciani, M., Joyce, E., Poudel, A., Bhattacharya, S., Veeramani, B., … Weninger, T. (2025). Citations and Trust in LLM Generated Responses. Proceedings of the AAAI Conference on Artificial Intelligence, 39(22), 23787–23795. https://doi.org/10.1609/aaai.v39i22.34550

Issue

Vol. 39 No. 22 (2025)

Section

AAAI Technical Track on Natural Language Processing I