Enhancing Low-Resource Relation Representations through Multi-View Decoupling

Authors

  • Chenghao Fan, Cognitive Computing and Intelligent Information Processing (CCIIP) Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology; Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL)
  • Wei Wei, Cognitive Computing and Intelligent Information Processing (CCIIP) Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology; Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL)
  • Xiaoye Qu, Cognitive Computing and Intelligent Information Processing (CCIIP) Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology; Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL)
  • Zhenyi Lu, Cognitive Computing and Intelligent Information Processing (CCIIP) Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology; Joint Laboratory of HUST and Pingan Property & Casualty Research (HPL)
  • Wenfeng Xie, Ping An Property & Casualty Insurance Company of China, Ltd.
  • Yu Cheng, The Chinese University of Hong Kong
  • Dangyang Chen, Ping An Property & Casualty Insurance Company of China, Ltd.

DOI:

https://doi.org/10.1609/aaai.v38i16.29752

Keywords:

NLP: Information Extraction, NLP: Text Classification

Abstract

Recently, prompt-tuning with pre-trained language models (PLMs) has significantly enhanced relation extraction (RE) tasks. However, in low-resource scenarios, where training data is scarce, previous prompt-based methods may still perform poorly at prompt-based representation learning due to a superficial understanding of the relation. To this end, we highlight the importance of learning high-quality relation representations for RE in low-resource scenarios, and propose a novel prompt-based relation representation method, named MVRE (Multi-View Relation Extraction), to better leverage the capacity of PLMs and improve RE performance within the low-resource prompt-tuning paradigm. Specifically, MVRE decouples each relation into different perspectives, encompassing multi-view relation representations that maximize the likelihood during relation inference. Furthermore, we design a Global-Local loss and a Dynamic-Initialization method for better alignment of the multi-view relation-representing virtual words, preserving the semantics of relation labels during both optimization and initialization. Extensive experiments on three benchmark datasets show that our method achieves state-of-the-art performance in low-resource settings.
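The central idea in the abstract, decoupling a relation label into several virtual-word "views" and aggregating their scores at inference time, can be illustrated at a high level. The sketch below is not the paper's implementation: the array names, toy dimensions, and mean-aggregation of view scores are all assumptions made purely for illustration, with random vectors standing in for the PLM's [MASK] hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_relations, n_views = 8, 3, 2  # toy sizes, chosen only for the sketch

# Each relation is decoupled into several learnable "view" embeddings
# (the multi-view relation-representing virtual words).
view_embeds = rng.normal(size=(n_relations, n_views, dim))

def relation_scores(h):
    """Score every relation by aggregating over its view embeddings.

    Illustrative only: the likelihood of a relation is approximated here
    by averaging the dot-product scores of all its views against the
    [MASK] hidden state h.
    """
    # similarity of h with every view of every relation -> (n_relations, n_views)
    logits = np.einsum('d,rvd->rv', h, view_embeds)
    # combine the views for each relation (mean is an assumption)
    return logits.mean(axis=1)

h = rng.normal(size=dim)              # stand-in for the PLM's [MASK] hidden state
pred = int(np.argmax(relation_scores(h)))
```

Under this reading, low-resource training would optimize the view embeddings so that the aggregated score of the gold relation is maximized; the paper's Global-Local loss and Dynamic-Initialization then shape how those views are aligned and initialized.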

Published

2024-03-24

How to Cite

Fan, C., Wei, W., Qu, X., Lu, Z., Xie, W., Cheng, Y., & Chen, D. (2024). Enhancing Low-Resource Relation Representations through Multi-View Decoupling. Proceedings of the AAAI Conference on Artificial Intelligence, 38(16), 17968–17976. https://doi.org/10.1609/aaai.v38i16.29752

Section

AAAI Technical Track on Natural Language Processing I