Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-Based Retrofitting
DOI:
https://doi.org/10.1609/aaai.v38i16.29770
Keywords:
NLP: (Large) Language Models
Abstract
Incorporating factual knowledge from knowledge graphs (KGs) is regarded as a promising approach for mitigating the hallucination of large language models (LLMs). Existing methods usually use only the user's input to query the knowledge graph, and thus fail to address factual hallucinations generated by LLMs during their reasoning process. To address this problem, this paper proposes Knowledge Graph-based Retrofitting (KGR), a new framework that combines LLMs with KGs to mitigate factual hallucination during the reasoning process by retrofitting the initial draft responses of LLMs based on the factual knowledge stored in KGs. Specifically, KGR leverages LLMs to extract, select, validate, and retrofit factual statements within the model-generated responses, which enables an autonomous knowledge verification and refinement procedure without any additional manual effort. Experiments show that KGR can significantly improve the performance of LLMs on factual QA benchmarks, especially when complex reasoning processes are involved, which demonstrates the necessity and effectiveness of KGR in mitigating hallucination and enhancing the reliability of LLMs.
Downloads
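The retrofitting loop described in the abstract (extract, select, validate, retrofit) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's implementation: in KGR each step is performed by an LLM over a full knowledge graph, whereas here the draft is already a list of triples, the KG is a toy dictionary, and all function names are hypothetical stand-ins.

```python
# A toy knowledge graph mapping (subject, relation) -> object.
KG = {
    ("Paris", "capital_of"): "France",
    ("Berlin", "capital_of"): "Germany",
}

def extract_statements(draft):
    """Extract factual claims from a draft response; for simplicity the
    draft is already a list of (subject, relation, object) triples."""
    return list(draft)

def select_statements(statements):
    """Select the claims the KG can actually check, i.e. those whose
    (subject, relation) pair is covered by the graph."""
    return [s for s in statements if (s[0], s[1]) in KG]

def validate_statement(statement):
    """Validate one claim against the KG; return the correct object if
    the claim is wrong, or None if it already matches the KG."""
    subj, rel, obj = statement
    fact = KG[(subj, rel)]
    return None if fact == obj else fact

def retrofit(draft):
    """Run the full loop: extract, select, validate, then retrofit any
    hallucinated objects with the KG-backed facts."""
    checkable = set(select_statements(extract_statements(draft)))
    revised = []
    for stmt in extract_statements(draft):
        if stmt in checkable:
            correction = validate_statement(stmt)
            if correction is not None:
                stmt = (stmt[0], stmt[1], correction)
        revised.append(stmt)
    return revised

# "Italy" is a hallucinated object; the loop replaces it with "France".
draft = [("Paris", "capital_of", "Italy"),
         ("Berlin", "capital_of", "Germany")]
print(retrofit(draft))
# -> [('Paris', 'capital_of', 'France'), ('Berlin', 'capital_of', 'Germany')]
```

The key point mirrored here is that verification operates on the model's draft response rather than on the user's query alone, so hallucinations introduced during reasoning can still be caught.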
Published
2024-03-24
How to Cite
Guan, X., Liu, Y., Lin, H., Lu, Y., He, B., Han, X., & Sun, L. (2024). Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-Based Retrofitting. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18126-18134. https://doi.org/10.1609/aaai.v38i16.29770
Issue
Section
AAAI Technical Track on Natural Language Processing I