Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Robust Response Generation in the Wild

Authors

  • Jiatai Wang — The College of Computer Science, Nankai University, Tianjin, China; Haihe Lab of ITAI, Tianjin, China
  • Zhiwei Xu — Haihe Lab of ITAI, Tianjin, China; Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China
  • Di Jin — Eigen AI, Palo Alto, U.S.A.
  • Xuewen Yang — The Department of Electrical and Computer Engineering, Stony Brook University, New York, U.S.A.
  • Tao Li — The College of Computer Science, Nankai University, Tianjin, China; Haihe Lab of ITAI, Tianjin, China

DOI:

https://doi.org/10.1609/aaai.v40i39.40641

Abstract

The proliferation of large language models (LLMs) has significantly advanced intelligent systems. Unfortunately, LLMs often face knowledge conflicts between their internal memory and retrieved external information, arising from misinformation, bias, or outdated knowledge. These conflicts undermine response reliability and introduce uncertainty into decision-making. In this work, we analyze how LLMs navigate knowledge conflicts from an information-theoretic perspective and reveal that when conflicting and supplementary information differ markedly, LLMs confidently resolve their preferences and alleviate uncertainty during response generation. When this difference is ambiguous, LLMs experience considerable uncertainty about what to generate. Based on this insight, we propose Swin-VIB, a novel framework that integrates a pipeline of variational information bottleneck models to adapt the retrieved-information difference, facilitating robust response generation by LLMs even in conflicting contexts. Extensive experiments confirm our theoretical analysis and demonstrate the effectiveness of Swin-VIB. Notably, Swin-VIB outperforms all competitive baselines on multiple-choice accuracy, while improving exact match (EM) scores on the open-ended QA task by at least 11.14%.
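For readers unfamiliar with the variational information bottleneck the abstract refers to, the sketch below shows the generic VIB training objective in numpy: a task loss plus a β-weighted KL term that compresses the latent representation toward a standard-normal prior. This is a minimal illustration of the general VIB formulation, not the paper's implementation; all function names and the choice of a diagonal-Gaussian posterior are our own assumptions.

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    # KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over latent dims.
    # This is the closed-form KL divergence for a diagonal Gaussian posterior.
    mu = np.asarray(mu, dtype=float)
    log_var = np.asarray(log_var, dtype=float)
    return 0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var)

def vib_objective(task_loss, mu, log_var, beta=1e-3):
    # Generic VIB objective: fit the task while bounding the information
    # the latent code z retains about its input (larger beta = more compression).
    return task_loss + beta * kl_to_standard_normal(mu, log_var)

# Example: a posterior already matching the prior contributes zero KL penalty.
mu = np.zeros(8)
log_var = np.zeros(8)  # variance 1 in every dimension
print(vib_objective(task_loss=0.7, mu=mu, log_var=log_var))  # → 0.7
```

In a retrieval-augmented setting such as the one the abstract describes, the latent code would summarize the retrieved passages, and the compression term is what lets the bottleneck suppress ambiguous or conflicting detail while keeping task-relevant content.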

Published

2026-03-14

How to Cite

Wang, J., Xu, Z., Jin, D., Yang, X., & Li, T. (2026). Accommodate Knowledge Conflicts in Retrieval-augmented LLMs: Towards Robust Response Generation in the Wild. Proceedings of the AAAI Conference on Artificial Intelligence, 40(39), 33530–33538. https://doi.org/10.1609/aaai.v40i39.40641

Section

AAAI Technical Track on Natural Language Processing IV