IterDE: An Iterative Knowledge Distillation Framework for Knowledge Graph Embeddings

Authors

  • Jiajun Liu Southeast University
  • Peng Wang Southeast University
  • Ziyu Shang Southeast University
  • Chenxiao Wu Southeast University

DOI:

https://doi.org/10.1609/aaai.v37i4.25570

Keywords:

DMKM: Linked Open Data, Knowledge Graphs & KB Completion

Abstract

Knowledge distillation for knowledge graph embedding (KGE) aims to reduce KGE model size to address storage limitations and improve knowledge reasoning efficiency. However, existing work still suffers from performance drops when compressing a high-dimensional original KGE model into a low-dimensional distilled KGE model. Moreover, most work focuses on reducing inference time but ignores the time-consuming training process of distilling KGE models. In this paper, we propose IterDE, a novel knowledge distillation framework for KGEs. First, IterDE introduces an iterative distillation strategy that enables a KGE model to act alternately as a student model and as a teacher model during the iterative distillation process. Consequently, knowledge can be transferred smoothly between high-dimensional teacher models and low-dimensional student models while preserving good KGE performance. Furthermore, to optimize the training process, we observe that the differing optimization objectives of the hard-label loss and the soft-label loss affect training efficiency, and we propose a dynamic soft-label weighting adjustment mechanism that balances the inconsistent optimization directions of the hard-label and soft-label losses by gradually increasing the weight of the soft-label loss. Our experimental results demonstrate that IterDE achieves new state-of-the-art distillation performance for KGEs compared to strong baselines on the link prediction task. Significantly, IterDE reduces training time by 50% on average. Finally, further exploratory experiments show that the dynamic soft-label weighting adjustment mechanism and more fine-grained iterations can improve distillation performance.
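As a rough illustration of the two ideas described in the abstract, the sketch below mixes a hard-label and a soft-label loss with a weight that grows over training, and loops over a chain of progressively smaller embedding dimensions in which each distilled student becomes the next teacher. The function names, the linear schedule, and the dimension values are assumptions for illustration only, not the paper's actual implementation.

```python
import torch

def soft_label_weight(step: int, total_steps: int, w_max: float = 1.0) -> float:
    # Gradually increase the soft-label weight during training
    # (a linear schedule is assumed here; the paper's exact schedule may differ).
    return w_max * min(step, total_steps) / total_steps

def distill_loss(hard_loss: torch.Tensor, soft_loss: torch.Tensor,
                 step: int, total_steps: int) -> torch.Tensor:
    # Combine the hard-label loss (fit to ground-truth triples) with the
    # soft-label loss (fit to the teacher's scores); early on the hard-label
    # loss dominates, later the soft-label loss takes over.
    w = soft_label_weight(step, total_steps)
    return (1.0 - w) * hard_loss + w * soft_loss

# Dummy loss values to show how the mixture shifts over training.
hard, soft = torch.tensor(0.8), torch.tensor(0.3)
print(distill_loss(hard, soft, step=100, total_steps=1000))   # mostly hard-label loss
print(distill_loss(hard, soft, step=900, total_steps=1000))   # mostly soft-label loss

# Iterative distillation chain: each distilled student becomes the teacher for
# the next, lower-dimensional student (the dimensions below are illustrative).
dims = [512, 256, 128, 64]
for teacher_dim, student_dim in zip(dims, dims[1:]):
    print(f"distill {teacher_dim}-d teacher -> {student_dim}-d student")
    # ... train the student with distill_loss(...) as the objective, then
    # reuse it as the teacher in the next round ...
```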

Published

2023-06-26

How to Cite

Liu, J., Wang, P., Shang, Z., & Wu, C. (2023). IterDE: An Iterative Knowledge Distillation Framework for Knowledge Graph Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 37(4), 4488-4496. https://doi.org/10.1609/aaai.v37i4.25570

Section

AAAI Technical Track on Data Mining and Knowledge Management