Editing Language Model-Based Knowledge Graph Embeddings
DOI: https://doi.org/10.1609/aaai.v38i16.29737
Keywords: NLP: Information Extraction, NLP: Applications
Abstract
Recent decades have witnessed the empirical success of framing Knowledge Graph (KG) embeddings via language models. However, language model-based KG embeddings are usually deployed as static artifacts, which makes them difficult to modify after deployment without re-training. To address this issue, we propose in this paper a new task of editing language model-based KG embeddings. This task is designed to enable rapid, data-efficient updates to KG embeddings without compromising performance on other aspects. We build four new datasets: E-FB15k237, A-FB15k237, E-WN18RR, and A-WN18RR, and evaluate several knowledge editing baselines, demonstrating their limited ability to handle this challenging task. We further propose a simple yet strong baseline dubbed KGEditor, which utilizes additional parametric layers of a hypernetwork to edit/add facts. Our comprehensive experimental results reveal that KGEditor excels at updating specific facts without impacting overall performance, even with limited training resources. Code and datasets are available at https://github.com/AnonymousForPapers/DeltaKG.
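For intuition, below is a minimal, self-contained PyTorch sketch of the general idea the abstract describes: a small hypernetwork reads the fact to be edited and emits the weights of one additional parametric layer, while the base embeddings stay frozen. Everything here is an illustrative assumption rather than the paper's KGEditor implementation: the toy DistMult scorer, the class names ToyKGEModel and HyperEditor, the loss terms, and all sizes are made up for the sketch.

import torch
import torch.nn as nn

torch.manual_seed(0)


class ToyKGEModel(nn.Module):
    # Toy DistMult scorer standing in for a frozen LM-based KG embedding model.
    def __init__(self, n_ent: int, n_rel: int, dim: int):
        super().__init__()
        self.ent = nn.Embedding(n_ent, dim)
        self.rel = nn.Embedding(n_rel, dim)

    def score(self, h, r, t, W=None, b=None):
        e_h, e_t = self.ent(h), self.ent(t)
        if W is not None:  # extra parametric layer applied as an additive correction
            e_h = e_h + e_h @ W.T + b
            e_t = e_t + e_t @ W.T + b
        return (e_h * self.rel(r) * e_t).sum(-1)


class HyperEditor(nn.Module):
    # Hypothetical hypernetwork: maps the fact to be edited to the weights of
    # one additional linear layer; the base model's parameters stay frozen.
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.dim = dim
        self.net = nn.Sequential(
            nn.Linear(3 * dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, dim * dim + dim),
        )

    def forward(self, model, h, r, t):
        x = torch.cat([model.ent(h), model.rel(r), model.ent(t)], dim=-1).squeeze(0)
        theta = self.net(x)
        W = theta[: self.dim * self.dim].view(self.dim, self.dim)
        b = theta[self.dim * self.dim:]
        return W, b


dim = 32
model = ToyKGEModel(n_ent=100, n_rel=20, dim=dim)
for p in model.parameters():  # never touch the pre-trained embeddings
    p.requires_grad_(False)
editor = HyperEditor(dim)

# Fact to edit (push its score up) and a batch of unrelated facts to preserve.
h, r, t = torch.tensor([3]), torch.tensor([5]), torch.tensor([7])
h_l = torch.randint(0, 100, (16,))
r_l = torch.randint(0, 20, (16,))
t_l = torch.randint(0, 100, (16,))
ref = model.score(h_l, r_l, t_l).detach()  # pre-edit scores to keep intact

opt = torch.optim.Adam(editor.parameters(), lr=1e-3)
for step in range(300):
    W, b = editor(model, h, r, t)
    edit_loss = nn.functional.softplus(-model.score(h, r, t, W, b)).mean()
    locality_loss = (model.score(h_l, r_l, t_l, W, b) - ref).pow(2).mean()
    loss = edit_loss + locality_loss
    opt.zero_grad()
    loss.backward()
    opt.step()

print("edited fact score:", model.score(h, r, t, *editor(model, h, r, t)).item())

In this toy setup the edit loss drives the target triple's score up while the locality term keeps the reference triples near their pre-edit scores, mirroring the update-without-side-effects goal of the task; only the hypernetwork's parameters are ever trained.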
Published
2024-03-24
How to Cite
Cheng, S., Zhang, N., Tian, B., Chen, X., Liu, Q., & Chen, H. (2024). Editing Language Model-Based Knowledge Graph Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17835-17843. https://doi.org/10.1609/aaai.v38i16.29737
Section
AAAI Technical Track on Natural Language Processing I