Improving the Efficiency and Effectiveness for BERT-based Entity Resolution

Authors

  • Bing Li University of New South Wales, Australia
  • Yukai Miao University of New South Wales, Australia
  • Yaoshu Wang Shenzhen Institute of Computing Sciences, Shenzhen University, China
  • Yifang Sun School of Computer Science and Engineering, Northeastern University, China
  • Wei Wang Dongguan University of Technology, China; University of New South Wales, Australia

Keywords:

Information Extraction, Text Classification & Sentiment Analysis

Abstract

BERT has set a new state-of-the-art performance on the entity resolution (ER) task, largely owing to fine-tuning pre-trained language models and deep pair-wise interaction. Although remarkably effective, the deep interaction comes with a steep increase in computational cost, as it requires exhaustively scoring every tuple pair to search for co-references. For the ER task, this is often prohibitively expensive due to the large number of tuple pairs to be matched. To tackle this, we introduce a siamese network structure that independently encodes tuples using BERT but delays the pair-wise interaction via an enhanced alignment network. This siamese structure enables a dedicated blocking module to quickly filter out obviously dissimilar tuple pairs, drastically reducing the number of pairs that require fine-grained matching. Further, blocking and entity matching are integrated into a multi-task learning framework, to the benefit of both tasks. Extensive experiments on multiple datasets demonstrate that our model significantly outperforms state-of-the-art models (including BERT) in both efficiency and effectiveness.
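The blocking idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `encode` stands in for the independent (siamese) BERT encoder, using a toy bag-of-random-token-vectors embedding, and `block` keeps only the tuple pairs whose cheap cosine similarity clears a threshold, so that only those candidates would be passed on to expensive fine-grained matching.

```python
import numpy as np

def encode(tuples, dim=64, seed=0):
    """Stand-in siamese encoder: each tuple string is embedded
    independently (no cross-tuple attention) by summing fixed random
    token vectors and L2-normalising. A real system would use a
    fine-tuned BERT encoder here."""
    rng = np.random.default_rng(seed)
    token_vecs = {}
    out = []
    for t in tuples:
        v = np.zeros(dim)
        for tok in t.lower().split():
            if tok not in token_vecs:
                token_vecs[tok] = rng.standard_normal(dim)
            v += token_vecs[tok]
        norm = np.linalg.norm(v)
        out.append(v / norm if norm > 0 else v)
    return np.stack(out)

def block(left, right, threshold=0.5):
    """Blocking stage: score all |left| x |right| pairs with cheap
    cosine similarity on the independent encodings, and keep only
    candidate pairs above the threshold for fine-grained matching."""
    emb = encode(left + right)           # one shared token space
    L, R = emb[:len(left)], emb[len(left):]
    sims = L @ R.T                       # cosine (rows are unit-norm)
    return [(i, j)
            for i in range(len(left))
            for j in range(len(right))
            if sims[i, j] >= threshold]

# Example: near-duplicate product tuples survive blocking, while the
# quadratic fine-grained stage only ever sees the surviving pairs.
left = ["apple iphone 11", "dell xps 13"]
right = ["apple iphone 11 black", "lenovo thinkpad x1"]
candidates = block(left, right, threshold=0.5)
```

Because each side is encoded once and independently, the per-tuple cost is linear in the number of tuples; only the cheap similarity scoring (not BERT) runs over the quadratic pair space.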

Published

2021-05-18

How to Cite

Li, B., Miao, Y., Wang, Y., Sun, Y., & Wang, W. (2021). Improving the Efficiency and Effectiveness for BERT-based Entity Resolution. Proceedings of the AAAI Conference on Artificial Intelligence, 35(15), 13226-13233. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17562

Section

AAAI Technical Track on Speech and Natural Language Processing II