TY  - JOUR
AU  - Barkan, Oren
AU  - Razin, Noam
AU  - Malkiel, Itzik
AU  - Katz, Ori
AU  - Caciularu, Avi
AU  - Koenigstein, Noam
PY  - 2020/04/03
Y2  - 2024/03/28
TI  - Scalable Attentive Sentence Pair Modeling via Distilled Sentence Embedding
JF  - Proceedings of the AAAI Conference on Artificial Intelligence
JA  - AAAI
VL  - 34
IS  - 04
SE  - AAAI Technical Track: Machine Learning
DO  - 10.1609/aaai.v34i04.5722
UR  - https://ojs.aaai.org/index.php/AAAI/article/view/5722
SP  - 3235-3242
AB  - Recent state-of-the-art natural language understanding models, such as BERT and XLNet, score a pair of sentences (A and B) using multiple cross-attention operations – a process in which each word in sentence A attends to all words in sentence B and vice versa. As a result, computing the similarity between a query sentence and a set of candidate sentences requires the propagation of all query-candidate sentence pairs through a stack of cross-attention layers. This exhaustive process becomes computationally prohibitive when the number of candidate sentences is large. In contrast, sentence embedding techniques learn a sentence-to-vector mapping and compute the similarity between the sentence vectors via simple elementary operations. In this paper, we introduce Distilled Sentence Embedding (DSE) – a model that is based on knowledge distillation from cross-attentive models, focusing on sentence-pair tasks. The outline of DSE is as follows: Given a cross-attentive teacher model (e.g. a fine-tuned BERT), we train a sentence-embedding-based student model to reconstruct the sentence-pair scores obtained by the teacher model. We empirically demonstrate the effectiveness of DSE on five GLUE sentence-pair tasks. DSE significantly outperforms several ELMo variants and other sentence embedding methods, while accelerating computation of the query-candidate sentence-pair similarities by several orders of magnitude, with an average relative degradation of 4.6% compared to BERT. Furthermore, we show that DSE produces sentence embeddings that reach state-of-the-art performance on universal sentence representation benchmarks. Our code is made publicly available at https://github.com/microsoft/Distilled-Sentence-Embedding.
ER  - 