P-SIF: Document Embeddings Using Partition Averaging
  • Vivek Gupta, University of Utah
  • Ankit Saw, InfoEdge (India) Ltd.
  • Pegah Nokhiz, University of Utah
  • Praneeth Netrapalli, Microsoft Research Lab, Bangalore
  • Piyush Rai, Computer Science Department, IIT Kanpur
  • Partha Talukdar, Indian Institute of Science, Bangalore

Abstract

Simple weighted averaging of word vectors often yields effective sentence representations that outperform sophisticated seq2seq neural models on many tasks. While it is desirable to represent documents the same way, the effectiveness is unfortunately lost when representing long documents spanning multiple sentences. A key reason is that a longer document is likely to contain words from many different topics; collapsing it into a single vector that ignores this topical structure is unlikely to yield an effective representation. The problem is less acute for single sentences and other short text fragments, where a single topic is most likely present. To alleviate this problem, we present P-SIF, a partitioned word-averaging model for representing long documents. P-SIF retains the simplicity of weighted word averaging while taking a document's topical structure into account. In particular, P-SIF learns topic-specific vectors from a document and concatenates them to represent the overall document. We provide theoretical justification for the correctness of P-SIF. Through a comprehensive set of experiments, we demonstrate P-SIF's effectiveness compared to simple weighted averaging and many other baselines.
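The partition-averaging idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not the authors' reference implementation: it partitions the word-vector space with plain k-means (the paper learns topics via sparse dictionary learning), uses the SIF-style weight a / (a + p(w)) to down-weight frequent words, and omits refinements such as common-component removal. The helper name `psif_embedding` and all parameters are assumptions for this sketch.

```python
import numpy as np

def psif_embedding(doc_words, word_vecs, word_freq, n_topics=3, a=1e-3,
                   n_iter=10, seed=0):
    """Sketch of P-SIF-style partition averaging (illustrative only).

    doc_words: list of tokens in the document
    word_vecs: dict token -> d-dimensional vector
    word_freq: dict token -> unigram probability p(w)
    """
    rng = np.random.default_rng(seed)
    vocab = [w for w in doc_words if w in word_vecs]
    X = np.stack([word_vecs[w] for w in vocab])            # (n, d)

    # 1) Partition the word vectors with plain k-means
    #    (a simplifying stand-in for the paper's sparse dictionary learning).
    centers = X[rng.choice(len(X), n_topics, replace=False)]
    for _ in range(n_iter):
        assign = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_topics):
            if (assign == k).any():
                centers[k] = X[assign == k].mean(0)

    # 2) SIF weight a / (a + p(w)) down-weights frequent words.
    weights = np.array([a / (a + word_freq.get(w, 1e-5)) for w in vocab])

    # 3) Weighted average per partition, then concatenate topic vectors.
    d = X.shape[1]
    parts = []
    for k in range(n_topics):
        mask = assign == k
        if mask.any():
            parts.append((weights[mask, None] * X[mask]).sum(0) / mask.sum())
        else:
            parts.append(np.zeros(d))                      # empty partition
    return np.concatenate(parts)                           # (n_topics * d,)
```

The resulting embedding has dimension `n_topics * d` rather than `d`, which is what lets the representation keep topic-specific averages separate instead of mixing all topics into one vector.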
How to Cite

Gupta, V., Saw, A., Nokhiz, P., Netrapalli, P., Rai, P., & Talukdar, P. (2020). P-SIF: Document Embeddings Using Partition Averaging. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05), 7863-7870. https://doi.org/10.1609/aaai.v34i05.6292
AAAI Technical Track: Natural Language Processing