Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion

Authors

  • Cunhang Fan Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University
  • Yujie Chen Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University
  • Jun Xue Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University
  • Yonghui Kong Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University
  • Jianhua Tao Department of Automation, Tsinghua University; Beijing National Research Center for Information Science and Technology, Tsinghua University
  • Zhao Lv Anhui Provincial Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University

DOI:

https://doi.org/10.1609/aaai.v38i8.28680

Keywords:

DMKM: Linked Open Data, Knowledge Graphs & KB Completion, DMKM: Semantic Web, NLP: Other

Abstract

In recent years, knowledge graph completion (KGC) models based on pre-trained language models (PLMs) have shown promising results. However, the large number of parameters and the high computational cost of PLMs pose challenges for their application in downstream tasks. This paper proposes a progressive distillation method based on masked generation features for the KGC task, aiming to significantly reduce the complexity of pre-trained models. Specifically, we perform pre-distillation on the PLM to obtain high-quality teacher models, and compress the PLM network to obtain multi-grade student models. However, traditional feature distillation is limited by the single representation of information in the teacher model. To solve this problem, we propose masked generation of teacher-student features, which carries richer representation information. Furthermore, there is a significant gap in representation ability between the teacher and the student models. We therefore design a progressive distillation method that distills the student model at each grade level, enabling efficient knowledge transfer from teachers to students. The experimental results demonstrate that the model in the pre-distillation stage surpasses existing state-of-the-art methods. Furthermore, in the progressive distillation stage, the student models significantly reduce the number of parameters while maintaining a certain level of performance. Specifically, the parameters of the lower-grade student model are reduced by 56.7% compared to the baseline.
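The abstract describes matching teacher and student features through masked generation. The snippet below is a minimal, hypothetical PyTorch-style sketch of such a masked-generation feature-distillation loss, given only for illustration; it is not the authors' implementation, and all names and hyperparameters here (e.g. MaskedFeatureDistillationLoss, mask_ratio) are assumptions.

import torch
import torch.nn as nn

class MaskedFeatureDistillationLoss(nn.Module):
    """Masks part of the student's token features, regenerates them with a
    small generator head, and matches the result to the teacher's features.
    Illustrative sketch only, not the authors' released code."""

    def __init__(self, student_dim: int, teacher_dim: int, mask_ratio: float = 0.3):
        super().__init__()
        self.mask_ratio = mask_ratio  # assumed masking hyperparameter
        # Learnable placeholder inserted at masked positions.
        self.mask_token = nn.Parameter(torch.zeros(student_dim))
        # Small generator head that regenerates features in the teacher's space.
        self.generator = nn.Sequential(
            nn.Linear(student_dim, student_dim),
            nn.GELU(),
            nn.Linear(student_dim, teacher_dim),
        )

    def forward(self, student_feats, teacher_feats):
        # student_feats: (B, L, student_dim); teacher_feats: (B, L, teacher_dim)
        B, L, _ = student_feats.shape
        mask = torch.rand(B, L, device=student_feats.device) < self.mask_ratio
        masked = torch.where(mask.unsqueeze(-1), self.mask_token, student_feats)
        generated = self.generator(masked)
        # Only the masked positions contribute to the distillation loss.
        mask_f = mask.unsqueeze(-1).float()
        sq_err = (generated - teacher_feats) ** 2
        return (sq_err * mask_f).sum() / (mask_f.sum() * teacher_feats.size(-1)).clamp(min=1.0)

In a progressive setup such as the one the abstract outlines, a loss of this kind could be applied between adjacent grades (teacher to higher-grade student, then higher-grade to lower-grade student) rather than in a single teacher-to-student step.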

Published

2024-03-24

How to Cite

Fan, C., Chen, Y., Xue, J., Kong, Y., Tao, J., & Lv, Z. (2024). Progressive Distillation Based on Masked Generation Feature Method for Knowledge Graph Completion. Proceedings of the AAAI Conference on Artificial Intelligence, 38(8), 8380-8388. https://doi.org/10.1609/aaai.v38i8.28680

Section

AAAI Technical Track on Data Mining & Knowledge Management