Siamese Recurrent Architectures for Learning Sentence Similarity

Authors

  • Jonas Mueller, Massachusetts Institute of Technology
  • Aditya Thyagarajan, M. S. Ramaiah Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v30i1.10350

Keywords:

neural network, semantic similarity, sentence representation, long short-term memory

Abstract

We present a siamese adaptation of the Long Short-Term Memory (LSTM) network for labeled data composed of pairs of variable-length sequences. Our model is applied to assess semantic similarity between sentences, where we exceed the state of the art, outperforming carefully handcrafted features and recently proposed neural network systems of greater complexity. For these applications, we provide word-embedding vectors supplemented with synonymic information to the LSTMs, which use a fixed-size vector to encode the underlying meaning expressed in a sentence (irrespective of the particular wording/syntax). By restricting subsequent operations to rely on a simple Manhattan metric, we compel the sentence representations learned by our model to form a highly structured space whose geometry reflects complex semantic relationships. Our results are the latest in a line of findings that showcase LSTMs as powerful language models capable of tasks requiring intricate understanding.
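For concreteness, the sketch below illustrates the siamese arrangement the abstract describes: a single LSTM with shared weights encodes each sentence of a pair into a fixed-size vector, and the pair is scored by the exponential of the negative Manhattan (L1) distance between the two encodings, yielding a similarity in (0, 1]. This is a minimal PyTorch sketch, not the authors' implementation; the class name, hidden sizes, and randomly initialized embeddings are illustrative assumptions (the paper instead feeds pre-trained word embeddings augmented with synonym information).

```python
import torch
import torch.nn as nn


class SiameseLSTM(nn.Module):
    """Minimal sketch of a siamese LSTM similarity model.

    Both sentences are encoded by the *same* LSTM (shared weights);
    similarity is exp(-||h_a - h_b||_1). Hyperparameters are
    illustrative, not the paper's exact configuration.
    """

    def __init__(self, vocab_size: int, embed_dim: int = 300, hidden_dim: int = 50):
        super().__init__()
        # Randomly initialized embeddings for illustration only;
        # the paper uses pre-trained vectors with synonymic augmentation.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) -> final hidden state (batch, hidden_dim)
        embedded = self.embedding(token_ids)
        _, (h_n, _) = self.lstm(embedded)
        return h_n[-1]

    def forward(self, sent_a: torch.Tensor, sent_b: torch.Tensor) -> torch.Tensor:
        h_a = self.encode(sent_a)  # same weights encode both sentences
        h_b = self.encode(sent_b)
        # Manhattan (L1) distance between sentence vectors, mapped to (0, 1].
        l1 = torch.sum(torch.abs(h_a - h_b), dim=1)
        return torch.exp(-l1)


if __name__ == "__main__":
    model = SiameseLSTM(vocab_size=10_000)
    a = torch.randint(0, 10_000, (4, 12))  # 4 sentence pairs, 12 tokens each
    b = torch.randint(0, 10_000, (4, 12))
    print(model(a, b))  # similarity scores in (0, 1]
```

Because the scoring function is fixed to this simple metric, all of the modeling capacity must go into the learned sentence vectors themselves, which is what drives the structured representation space described in the abstract.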

Published

2016-03-05

How to Cite

Mueller, J., & Thyagarajan, A. (2016). Siamese Recurrent Architectures for Learning Sentence Similarity. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10350

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: NLP and Machine Learning