History Matters: Temporal Knowledge Editing in Large Language Model

Authors

  • Xunjian Yin, Peking University
  • Jin Jiang, Peking University
  • Liming Yang, Tsinghua University
  • Xiaojun Wan, Peking University

DOI

https://doi.org/10.1609/aaai.v38i17.29912

Keywords

NLP: (Large) Language Models, NLP: Interpretability, Analysis, and Evaluation of NLP Models, KRR: Applications, General

Abstract

The need to revise or update the knowledge stored within large language models arises from two distinct sources: intrinsic errors in the model, which should be corrected, and knowledge that has become outdated due to shifts in the real world, which should be updated. Prevailing model editing efforts conflate these two categories of edits, despite their distinct origins, and directly overwrite the model's original knowledge with new knowledge. However, we argue that preserving the model's original knowledge remains pertinent: if a model's knowledge becomes outdated because the world has changed, the model should retain the historical knowledge while integrating the new. In this work, we introduce the task of Temporal Knowledge Editing (TKE) and establish the benchmark AToKe (Assessment of TempOral Knowledge Editing) to evaluate current model editing methods. We find that while existing methods are effective at making models remember new knowledge, the edited models catastrophically forget historical knowledge. To address this gap, we propose a simple and general framework, Multi-Editing with Time Objective (METO), for enhancing existing editing methods: it edits historical and new knowledge concurrently and optimizes the model's prediction of the time at which each fact holds. Our assessments demonstrate that, while AToKe remains challenging, METO preserves the effectiveness of learning new knowledge while substantially improving the edited models' use of historical knowledge.
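To make the METO idea concrete, the following is a minimal, hypothetical sketch of the kind of objective the abstract describes, not the authors' released implementation. It assumes a Hugging Face causal LM and tokenizer; the prompt templates, the helper `lm_loss`, and the `time_weight` hyperparameter are illustrative assumptions.

```python
# Hypothetical sketch of the Multi-Editing with Time Objective (METO) idea as
# described in the abstract; NOT the authors' released implementation.
# `model`/`tokenizer` can be any Hugging Face causal LM pair, e.g.:
#   from transformers import AutoModelForCausalLM, AutoTokenizer
#   model = AutoModelForCausalLM.from_pretrained("gpt2")
#   tokenizer = AutoTokenizer.from_pretrained("gpt2")
# The prompt templates and `time_weight` below are illustrative assumptions.

def lm_loss(model, tokenizer, prompt, target):
    """Next-token cross-entropy computed on the target continuation only."""
    enc = tokenizer(prompt + " " + target, return_tensors="pt")
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    labels = enc.input_ids.clone()
    labels[:, :prompt_len] = -100  # ignore prompt tokens in the loss
    return model(**enc, labels=labels).loss

def meto_loss(model, tokenizer, subject, relation,
              hist_obj, hist_time, new_obj, new_time, time_weight=0.5):
    """Edit the historical fact and the new fact concurrently, with an
    auxiliary objective supervising WHEN each fact holds."""
    total = 0.0
    for obj, time in [(hist_obj, hist_time), (new_obj, new_time)]:
        # Fact objective: a time-scoped prompt should yield the right object.
        total = total + lm_loss(
            model, tokenizer, f"From {time}, {subject} {relation}", obj)
        # Time objective: the model should recover the fact's time span.
        total = total + time_weight * lm_loss(
            model, tokenizer, f"{subject} {relation} {obj} from", time)
    return total  # minimize with the optimizer of any existing editing method
```

The key design point this sketch tries to capture is that both facts are written into the model in a single edit pass, so learning the new fact does not have to come at the cost of erasing the historical one.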

Published

2024-03-24

How to Cite

Yin, X., Jiang, J., Yang, L., & Wan, X. (2024). History Matters: Temporal Knowledge Editing in Large Language Model. Proceedings of the AAAI Conference on Artificial Intelligence, 38(17), 19413-19421. https://doi.org/10.1609/aaai.v38i17.29912

Issue

Vol. 38 No. 17 (2024)

Section

AAAI Technical Track on Natural Language Processing II