CA-RNN: Using Context-Aligned Recurrent Neural Networks for Modeling Sentence Similarity

Authors

  • Qin Chen, East China Normal University
  • Qinmin Hu, East China Normal University
  • Jimmy Xiangji Huang, York University
  • Liang He, East China Normal University

DOI:

https://doi.org/10.1609/aaai.v32i1.11273

Keywords:

context alignment gating, context-aligned recurrent neural networks, sentence similarity modeling

Abstract

Recurrent neural networks (RNNs) have shown good performance for sentence similarity modeling in recent years. However, most RNN-based models compute the hidden states of each sentence in isolation, so the contextual information from the other sentence in the pair is not well exploited during hidden state generation. In this paper, we propose a context-aligned RNN (CA-RNN) model that incorporates the contextual information of the aligned words in a sentence pair into hidden state generation. Specifically, we first perform word alignment detection to identify the aligned words in the two sentences. We then present a context alignment gating mechanism, embedded in our model, that automatically absorbs the aligned words' context when updating the hidden states. Experiments on three benchmark datasets, namely TREC-QA and WikiQA for answer selection and MSRP for paraphrase identification, demonstrate the advantages of our proposed model. In particular, we achieve new state-of-the-art performance on TREC-QA and WikiQA. Furthermore, our model is comparable to, if not better than, recent neural network based approaches on MSRP.
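To make the gating idea concrete, below is a minimal sketch of how a context alignment gate might be layered on a standard recurrent cell. This is an illustration under stated assumptions, not the authors' implementation: word alignment is approximated here by greedy cosine similarity over word embeddings, and the gate form sigmoid(W_g[h_t; c_t] + b_g) with a convex combination of the hidden state and the aligned-word context is a hypothetical instantiation consistent with the abstract's description.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContextAlignedRNN(nn.Module):
    """Sketch of an RNN whose hidden states are updated with a gate over
    the context of aligned words from the paired sentence (assumed form,
    not the paper's exact equations)."""

    def __init__(self, embed_dim: int, hidden_dim: int):
        super().__init__()
        self.cell = nn.GRUCell(embed_dim, hidden_dim)      # base recurrent cell
        self.ctx_proj = nn.Linear(embed_dim, hidden_dim)   # project aligned word into hidden space
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)  # context alignment gate

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor) -> torch.Tensor:
        # sent_a: (len_a, embed_dim), sent_b: (len_b, embed_dim)
        # Word alignment detection (assumption): align each word of sent_a
        # to its most cosine-similar word in sent_b.
        sim = F.cosine_similarity(sent_a.unsqueeze(1), sent_b.unsqueeze(0), dim=-1)
        aligned = sent_b[sim.argmax(dim=1)]                # (len_a, embed_dim)

        h = sent_a.new_zeros(1, self.cell.hidden_size)
        states = []
        for t in range(sent_a.size(0)):
            h = self.cell(sent_a[t:t + 1], h)              # ordinary recurrent update
            c = torch.tanh(self.ctx_proj(aligned[t:t + 1]))  # aligned-word context
            g = torch.sigmoid(self.gate(torch.cat([h, c], dim=-1)))
            h = g * h + (1 - g) * c                        # gated absorption of the context
            states.append(h)
        return torch.cat(states, dim=0)                    # context-aligned states, (len_a, hidden_dim)

# Usage: encode each sentence against the other, then pool the states
# (e.g., mean) and score similarity with cosine distance.
a, b = torch.randn(7, 50), torch.randn(9, 50)  # two sentences of word embeddings
states = ContextAlignedRNN(50, 64)(a, b)       # (7, 64)
```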


Published

2018-04-25

How to Cite

Chen, Q., Hu, Q., Huang, J. X., & He, L. (2018). CA-RNN: Using Context-Aligned Recurrent Neural Networks for Modeling Sentence Similarity. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11273