Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning

Authors

  • Jiaan Wang Soochow University
  • Jianfeng Qu Soochow University
  • Kexin Wang Soochow University
  • Zhixu Li Fudan University
  • Wen Hua The Hong Kong Polytechnic University
  • Ximing Li Jilin University
  • An Liu Soochow University

DOI:

https://doi.org/10.1609/aaai.v38i17.29881

Keywords:

NLP: Conversational AI/Dialog Systems, NLP: Generation

Abstract

Knowledge-grounded dialogue (KGD) learns to generate an informative response based on a given dialogue context and external knowledge (e.g., knowledge graphs; KGs). Recently, the emergence of large language models (LLMs) and pre-training techniques has brought great success to knowledge-grounded dialogue. However, when building KGD systems for real applications, various kinds of real-world noise are inevitable. For example, the dialogue context might involve perturbations such as misspellings and abbreviations. In addition, KGs typically suffer from incompleteness and may also contain erroneous and outdated facts. Such real-world noise poses a challenge to the robustness of KGD systems and hinders their application in the real world. In this paper, we propose an entity-based contrastive learning framework for improving the robustness of KGD. Specifically, we make use of the entity information in a KGD sample to create both its positive and negative samples, which involve semantic-irrelevant and semantic-relevant perturbations, respectively. The contrastive learning framework makes the KGD model aware of these two types of perturbations, so that it can generate informative responses from the potentially noisy inputs encountered in real applications. Experimental results on three widely used benchmark datasets show that our method achieves new state-of-the-art performance in terms of automatic evaluation scores, verifying its effectiveness and potential. Furthermore, we show that our method generates better responses than comparison models in both the noisy and the few-shot settings.
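The abstract distinguishes semantic-irrelevant perturbations (e.g., a misspelling, which should not change the response) from semantic-relevant ones (e.g., swapping a grounded entity, which should). As a rough illustration only, and not the paper's actual implementation, the following Python sketch shows one plausible way such positive and negative samples could be constructed; the function names and the perturbation choices (character dropping, entity substitution) are assumptions for illustration.

```python
import random

def make_positive(tokens, entities):
    """Semantic-irrelevant perturbation (hypothetical): misspell one
    non-entity token by dropping a character, leaving meaning intact."""
    candidates = [i for i, t in enumerate(tokens)
                  if t not in entities and len(t) > 3]
    out = list(tokens)
    if candidates:
        i = random.choice(candidates)
        j = random.randrange(len(out[i]))
        out[i] = out[i][:j] + out[i][j + 1:]  # drop one character
    return out

def make_negative(tokens, entities, kb_entities):
    """Semantic-relevant perturbation (hypothetical): replace the first
    grounded entity with a different KB entity, changing the meaning."""
    out = list(tokens)
    for i, t in enumerate(tokens):
        if t in entities:
            out[i] = random.choice([e for e in kb_entities if e != t])
            break
    return out
```

A contrastive objective would then pull the model's representation of the original sample toward its positive and push it away from its negative, so the model learns to ignore surface noise while staying sensitive to factual changes.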

Published

2024-03-24

How to Cite

Wang, J., Qu, J., Wang, K., Li, Z., Hua, W., Li, X., & Liu, A. (2024). Improving the Robustness of Knowledge-Grounded Dialogue via Contrastive Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19135-19143. https://doi.org/10.1609/aaai.v38i17.29881

Section

AAAI Technical Track on Natural Language Processing II