Improving Factual Error Correction by Learning to Inject Factual Errors

Authors

  • Xingwei He The University of Hong Kong, Hong Kong, China
  • Qianru Zhang The University of Hong Kong, Hong Kong, China
  • A-Long Jin The University of Hong Kong, Hong Kong, China
  • Jun Ma The University of Hong Kong, Hong Kong, China
  • Yuan Yuan School of Computer Science and Engineering, Beihang University, Beijing, China; State Key Laboratory of Software Development Environment; Zhongguancun Laboratory
  • Siu Ming Yiu The University of Hong Kong, Hong Kong, China

DOI:

https://doi.org/10.1609/aaai.v38i16.29778

Keywords:

NLP: Applications, NLP: Generation

Abstract

Factual error correction (FEC) aims to revise factual errors in false claims with minimal editing, making them faithful to the provided evidence. This task is crucial for alleviating the hallucination problem encountered by large language models. Given the lack of paired data (i.e., false claims and their corresponding correct claims), existing methods typically adopt the ‘mask-then-correct’ paradigm. This paradigm relies solely on unpaired false claims and correct claims, and such methods are thus referred to as distantly supervised. These methods require a masker to explicitly identify factual errors within false claims before revising them with a corrector. However, the absence of paired data to train the masker makes accurately pinpointing factual errors within claims challenging. To mitigate this, we propose to improve FEC by Learning to Inject Factual Errors (LIFE), a three-step distantly supervised method: ‘mask-corrupt-correct’. Specifically, we first train a corruptor using the ‘mask-then-corrupt’ procedure, allowing it to deliberately introduce factual errors into correct text. The corruptor is then applied to correct claims, generating a substantial amount of paired data. After that, we filter out low-quality data and use the remaining data to train a corrector. Notably, our corrector does not require a masker, thus circumventing the bottleneck associated with explicit factual error identification. Our experiments on a public dataset verify the effectiveness of LIFE in two key aspects: first, it outperforms the previous best-performing distantly supervised method by a notable margin of 10.59 points in SARI Final (a 19.3% improvement); second, even compared to ChatGPT prompted with in-context examples, LIFE maintains an advantage of 7.16 points in SARI Final.
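The ‘mask-corrupt-correct’ data pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch only: LIFE fine-tunes seq2seq models as the corruptor and corrector, whereas the stand-in functions here (`mask_claim`, `corrupt`, `quality_filter`) are hypothetical rule-based placeholders meant only to show how paired (false claim, correct claim) training data is synthesized from correct claims alone.

```python
import random

random.seed(0)

def mask_claim(claim, entities):
    """Mask step: hide evidence-linked spans in a *correct* claim.
    (In LIFE this masking feeds a trained corruptor model.)"""
    for ent in entities:
        claim = claim.replace(ent, "[MASK]")
    return claim

def corrupt(masked_claim, distractors):
    """Corrupt step: fill each mask with a wrong entity, deliberately
    injecting a factual error to produce a synthetic false claim."""
    out = masked_claim
    while "[MASK]" in out:
        out = out.replace("[MASK]", random.choice(distractors), 1)
    return out

def quality_filter(pairs):
    """Filter step: drop low-quality pairs, e.g. where no error
    was actually injected (false claim equals the correct claim)."""
    return [(false_c, true_c) for false_c, true_c in pairs if false_c != true_c]

# Synthesize one paired example from a correct claim.
correct_claim = "Paris is the capital of France."
false_claim = corrupt(mask_claim(correct_claim, ["Paris"]), ["Lyon", "Nice"])
pairs = quality_filter([(false_claim, correct_claim)])
# `pairs` now holds (false claim, correct claim) supervision on which a
# corrector can be trained directly, with no masker needed at correction time.
```

Because the corrector is trained end-to-end on these synthetic pairs, it maps a false claim straight to its corrected form, which is how LIFE sidesteps the explicit error-identification bottleneck of the ‘mask-then-correct’ paradigm.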

Published

2024-03-24

How to Cite

He, X., Zhang, Q., Jin, A.-L., Ma, J., Yuan, Y., & Yiu, S. M. (2024). Improving Factual Error Correction by Learning to Inject Factual Errors. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 18197-18205. https://doi.org/10.1609/aaai.v38i16.29778

Issue

Section

AAAI Technical Track on Natural Language Processing I