Relation Also Knows: Rethinking the Recall and Editing of Factual Associations in Auto-Regressive Transformer Language Models

Authors

  • Xiyu Liu — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Zhengxiao Liu — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Naibin Gu — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Zheng Lin — Institute of Information Engineering, Chinese Academy of Sciences; School of Cyber Security, University of Chinese Academy of Sciences
  • Wanli Ma — University of Electronic Science and Technology of China
  • Ji Xiang — Institute of Information Engineering, Chinese Academy of Sciences
  • Weiping Wang — Institute of Information Engineering, Chinese Academy of Sciences

DOI:

https://doi.org/10.1609/aaai.v39i23.34646

Abstract

The storage and recall of factual associations in auto-regressive transformer language models (LMs) have drawn considerable attention, inspiring knowledge editing methods that directly modify the located model weights. Most editing works achieve knowledge editing under the guidance of existing interpretations of knowledge recall, which focus mainly on subject knowledge. However, these interpretations are seriously flawed: they neglect relation information, leading to the *over-generalizing* problem in editing. In this work, we discover a novel relation-focused perspective for interpreting the knowledge recall of transformer LMs during inference and apply it to single knowledge editing to avoid over-generalizing. Experimental results on a dataset supplemented with a new R-Specificity criterion demonstrate that our editing approach significantly alleviates over-generalizing while remaining competitive on other criteria, challenging the dominance of subject-focused editing in future research.

Published

2025-04-11

How to Cite

Liu, X., Liu, Z., Gu, N., Lin, Z., Ma, W., Xiang, J., & Wang, W. (2025). Relation Also Knows: Rethinking the Recall and Editing of Factual Associations in Auto-Regressive Transformer Language Models. Proceedings of the AAAI Conference on Artificial Intelligence, 39(23), 24659–24667. https://doi.org/10.1609/aaai.v39i23.34646

Section

AAAI Technical Track on Natural Language Processing II