Algorithms for Generalized Topic Modeling

Authors

  • Avrim Blum Toyota Technological Institute at Chicago
  • Nika Haghtalab Carnegie Mellon University

Keywords

Topic Modeling, Co-training, Multi-view Learning

Abstract

Recently there has been significant activity in developing algorithms with provable guarantees for topic modeling. In this work we consider a broad generalization of the traditional topic modeling framework, where we no longer assume that words are drawn i.i.d. and instead view a topic as a complex distribution over sequences of paragraphs. Since one could not hope even to represent such a distribution in general (even if paragraphs are given using some natural feature representation), we aim instead to directly learn a predictor that, given a new document, accurately predicts its topic mixture, without learning the distributions explicitly. We present several natural conditions under which one can do this from unlabeled data only, and give efficient algorithms to do so, also discussing issues such as noise tolerance and sample complexity. More generally, our model can be viewed as a generalization of the multi-view or co-training setting in machine learning.
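To make the abstract's goal concrete, the sketch below illustrates (purely as a toy, not the paper's algorithm) what a topic-mixture predictor's interface looks like: each paragraph of a document is assumed to come with a feature-vector representation, and a hypothetical set of learned per-topic weight vectors maps the document directly to a point on the probability simplex, with no explicit model of the topic distributions. All names and the random weights are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_TOPICS = 3
FEATURE_DIM = 5

# Hypothetical learned per-topic weight vectors (random stand-ins;
# the paper's algorithms would produce these from unlabeled data).
topic_weights = rng.normal(size=(NUM_TOPICS, FEATURE_DIM))

def predict_topic_mixture(paragraph_features: np.ndarray) -> np.ndarray:
    """Map a document, given as an array of paragraph feature vectors
    of shape [num_paragraphs, FEATURE_DIM], to a topic-mixture estimate
    (a nonnegative vector summing to 1)."""
    scores = paragraph_features @ topic_weights.T  # [paragraphs, topics]
    doc_score = scores.mean(axis=0)                # aggregate over paragraphs
    exp = np.exp(doc_score - doc_score.max())      # softmax onto the simplex
    return exp / exp.sum()

# A toy 4-paragraph document in the assumed feature representation.
document = rng.normal(size=(4, FEATURE_DIM))
mixture = predict_topic_mixture(document)
print(mixture)
```

The point of the interface is that the output lives on the simplex (a valid topic mixture) even though nothing about the paragraph-level distributions is ever represented.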

Published

2018-04-29

How to Cite

Blum, A., & Haghtalab, N. (2018). Algorithms for Generalized Topic Modeling. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11825