Dual-Clustering Maximum Entropy with Application to Classification and Word Embedding

Authors

  • Xiaolong Wang, University of Illinois
  • Jingjing Wang, University of Illinois
  • Chengxiang Zhai, University of Illinois

DOI:

https://doi.org/10.1609/aaai.v31i1.10991

Keywords:

Maximum Entropy, Dual Clustering

Abstract

Maximum Entropy (ME), as a general-purpose machine learning model, has been successfully applied to various fields such as text mining and natural language processing. It has been used as a classification technique and recently also applied to learn word embeddings. ME establishes a distribution of the exponential form over items (classes/words). When training such a model, learning efficiency is guaranteed by globally updating the entire set of model parameters associated with all items for each training instance. This creates a significant computational challenge when the number of items is large. To achieve learning efficiency with affordable computational cost, we propose an approach named Dual-Clustering Maximum Entropy (DCME). Exploiting the primal-dual form of ME, it conducts clustering in the dual space and approximates each dual distribution by the corresponding cluster center. This naturally enables a hybrid online-offline optimization algorithm whose time complexity per instance scales only as the product of the feature/word vector dimensionality and the number of clusters. Experimental studies on text classification and word embedding learning demonstrate that DCME effectively strikes a balance between training speed and model quality, substantially outperforming state-of-the-art methods.
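For readers who want a concrete picture of the hybrid online-offline idea described above, the sketch below assumes a standard softmax parameterization p(y|x) ∝ exp(w_y · x), where the exact gradient for one instance touches all K items (O(d·K)), while a DCME-style step replaces the instance's dual distribution with one of C cluster centers and defers the expensive update (O(d·C) online). The plain k-means clustering, the mean-feature cluster "keys", the flush schedule, and all names here are illustrative assumptions, not the algorithm from the paper.

    import numpy as np

    # Toy setup: a softmax-parameterized ME model p(y|x) ~ exp(w_y . x)
    # with one d-dimensional weight vector per item.
    rng = np.random.default_rng(0)
    d, K, C, lr = 20, 1000, 8, 0.1   # feature dim, items, clusters, step size
    W = 0.01 * rng.standard_normal((K, d))

    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z)
        return e / e.sum(axis=-1, keepdims=True)

    def exact_sgd_step(x, y):
        # Baseline: the exact ME gradient needs the full distribution over
        # all K items, so every training step costs O(d * K).
        p = softmax(W @ x)
        W[y] += lr * x                  # observed-feature term
        W[:] -= lr * np.outer(p, x)     # expected-feature term, rank-1

    def offline_cluster(X_sample, n_iter=10):
        # Offline step: compute the dual distributions of a sample of
        # instances and cluster them (plain k-means, a stand-in for the
        # paper's clustering scheme). Also keep a cheap d-dimensional key
        # per cluster (mean feature vector of its members) so the online
        # step can pick a center in O(d * C).
        P = softmax(X_sample @ W.T)                    # dual distributions
        centers = P[rng.choice(len(X_sample), C, replace=False)]
        for _ in range(n_iter):
            assign = ((P[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
            for c in range(C):
                if np.any(assign == c):
                    centers[c] = P[assign == c].mean(0)
        keys = np.stack([X_sample[assign == c].mean(0) if np.any(assign == c)
                         else np.zeros(d) for c in range(C)])
        return centers, keys

    centers, keys = offline_cluster(rng.standard_normal((200, d)))
    pending = np.zeros((C, d))   # deferred expected-feature updates

    def online_step(x, y):
        # Online step, O(d * C): stand in this instance's dual distribution
        # with its nearest cluster center and defer the expensive part.
        c = ((keys - x) ** 2).sum(1).argmin()   # nearest cluster key
        W[y] += lr * x                          # observed term, O(d)
        pending[c] += x                         # defer expected term

    def flush():
        # Apply the deferred updates: one rank-1 update per cluster center,
        # amortized over all online steps since the last flush.
        for c in range(C):
            W[:] -= lr * np.outer(centers[c], pending[c])
        pending[:] = 0.0

In a training loop one would call online_step per instance and run flush (plus re-clustering via offline_cluster) periodically, so the O(d·K) work is paid once per batch of instances rather than once per instance.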

Published

2017-02-12

How to Cite

Wang, X., Wang, J., & Zhai, C. (2017). Dual-Clustering Maximum Entropy with Application to Classification and Word Embedding. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10991