Asynchronous Doubly Stochastic Sparse Kernel Learning

Authors

  • Bin Gu, University of Pittsburgh
  • Miao Xin, University of Texas at Arlington
  • Zhouyuan Huo, University of Pittsburgh
  • Heng Huang, University of Pittsburgh

Keywords

Kernel Learning, Sparse Kernel, Asynchronous Doubly Stochastic Algorithm

Abstract

Kernel methods have achieved tremendous success in the past two decades. In the current big-data era, however, the volume of collected data has grown dramatically, and existing kernel methods are not scalable enough at either the training or the prediction step. To address this challenge, in this paper we first introduce a general sparse kernel learning formulation based on the random feature approximation, where the loss functions are possibly non-convex. We then propose a new asynchronous parallel doubly stochastic algorithm for large-scale sparse kernel learning (AsyDSSKL). To the best of our knowledge, AsyDSSKL is the first algorithm to combine asynchronous parallel computation with doubly stochastic optimization. We also provide a comprehensive convergence guarantee for AsyDSSKL. Importantly, experimental results on various large-scale real-world datasets show that AsyDSSKL is significantly more computationally efficient at both the training and prediction steps than existing kernel methods.
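The random feature approximation the abstract refers to replaces expensive exact kernel evaluations with inner products in an explicit, low-dimensional feature space. As a minimal illustrative sketch (not the paper's AsyDSSKL algorithm), the snippet below uses random Fourier features to approximate an RBF kernel; the function name and parameters are chosen here for illustration:

```python
import numpy as np

def random_fourier_features(X, n_features, gamma, rng):
    """Map X to a random feature space whose inner products approximate
    the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2) (Rahimi & Recht)."""
    d = X.shape[1]
    # Spectral frequencies sampled from the Gaussian matching the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    # Cosine features; the scaling makes E[z(x)^T z(y)] = k(x, y).
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
Z = random_fourier_features(X, n_features=2000, gamma=0.5, rng=rng)

# Compare the exact RBF Gram matrix with its random-feature approximation.
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
K_exact = np.exp(-0.5 * sq_dists)
K_approx = Z @ Z.T
max_err = np.abs(K_exact - K_approx).max()
```

Training a linear model on `Z` then costs time linear in `n_features` rather than quadratic in the number of samples, which is what makes the sparse kernel formulation amenable to doubly stochastic (over samples and coordinates) updates.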

Published

2018-04-29

How to Cite

Gu, B., Xin, M., Huo, Z., & Huang, H. (2018). Asynchronous Doubly Stochastic Sparse Kernel Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/11803