Document Informed Neural Autoregressive Topic Models with Distributional Prior

Authors

  • Pankaj Gupta, University of Munich
  • Yatin Chaudhary, Siemens
  • Florian Buettner, Siemens
  • Hinrich Schütze, University of Munich

DOI:

https://doi.org/10.1609/aaai.v33i01.33016505

Abstract

We address two challenges in topic models: (1) Context information around words helps in determining their actual meaning, e.g., “networks” used in the contexts artificial neural networks vs. biological neuron networks. Generative topic models infer topic-word distributions, taking little or no context into account. Here, we extend a neural autoregressive topic model to exploit the full context information around words in a document in a language modeling fashion. The proposed model is named iDocNADE. (2) Due to the small number of word occurrences (i.e., lack of context) in short texts and data sparsity in a corpus of few documents, applying topic models to such texts is challenging. Therefore, we propose a simple and efficient way of incorporating external knowledge into neural autoregressive topic models: we use word embeddings as a distributional prior. The proposed variants are named DocNADEe and iDocNADEe. We present novel neural autoregressive topic model variants that consistently outperform state-of-the-art generative topic models in terms of generalization, interpretability (topic coherence) and applicability (retrieval and classification) across 7 long-text and 8 short-text datasets from diverse domains.
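For intuition, the sketch below illustrates the two ideas from the abstract: a DocNADE-style autoregressive hidden state built from the words seen so far, extended with a backward pass over the document (iDocNADE) and with a pretrained-embedding term mixed into the hidden state as a distributional prior (DocNADEe/iDocNADEe). This is a minimal, simplified sketch, not the authors' implementation: the sizes, parameter names (W, E, U, lam), and random initialization are illustrative assumptions, and training as well as the paper's efficiency details are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and randomly initialized parameters (in the paper these
# are learned; E would hold pretrained word embeddings, e.g., GloVe vectors).
V, H = 1000, 50                    # vocabulary size, hidden/topic dimension
W = rng.normal(0, 0.01, (H, V))    # word-topic matrix (learned)
E = rng.normal(0, 0.01, (H, V))    # stand-in for the pretrained-embedding prior
U = rng.normal(0, 0.01, (V, H))    # output weights
b, c = np.zeros(V), np.zeros(H)    # biases
lam = 0.5                          # mixing weight for the embedding prior

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def doc_log_likelihood(doc, use_prior=True, bidirectional=True):
    """Mean per-word log-likelihood of a document (a list of word ids)."""
    def one_direction(words):
        ll, acc = 0.0, c.copy()
        for v in words:
            h = np.tanh(acc)                     # hidden state from words seen so far
            ll += np.log(softmax(U @ h + b)[v])  # autoregressive p(v_i | context)
            acc = acc + W[:, v]
            if use_prior:                        # DocNADEe/iDocNADEe: embedding prior
                acc = acc + lam * E[:, v]
        return ll
    fwd = one_direction(doc)
    if not bidirectional:                        # plain DocNADE(e): forward pass only
        return fwd / len(doc)
    bwd = one_direction(doc[::-1])               # iDocNADE(e): backward pass as well
    return 0.5 * (fwd + bwd) / len(doc)

doc = rng.integers(0, V, size=20).tolist()       # a toy "document" of 20 word ids
print(doc_log_likelihood(doc))                   # iDocNADEe-style score
```

Averaging the forward and backward log-likelihoods is what lets every word be predicted from its full surrounding context rather than only the preceding words, which is the "document informed" aspect of iDocNADE.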

Published

2019-07-17

How to Cite

Gupta, P., Chaudhary, Y., Buettner, F., & Schütze, H. (2019). Document Informed Neural Autoregressive Topic Models with Distributional Prior. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 6505-6512. https://doi.org/10.1609/aaai.v33i01.33016505

Issue

Vol. 33 No. 01 (2019)

Section

AAAI Technical Track: Natural Language Processing