SCOPE: Scalable Composite Optimization for Learning on Spark

Authors

  • Shen-Yi Zhao, Nanjing University
  • Ru Xiang, Nanjing University
  • Ying-Hao Shi, Nanjing University
  • Peng Gao, Nanjing University
  • Wu-Jun Li, Nanjing University

DOI

https://doi.org/10.1609/aaai.v31i1.10920

Abstract

Many machine learning models, such as logistic regression (LR) and the support vector machine (SVM), can be formulated as composite optimization problems. Recently, many distributed stochastic optimization (DSO) methods have been proposed to solve large-scale composite optimization problems, and they have shown better performance than traditional batch methods. However, most of these DSO methods may not be scalable enough. In this paper, we propose a novel DSO method, called scalable composite optimization for learning (SCOPE), and implement it on the fault-tolerant distributed platform Spark. SCOPE is both computation-efficient and communication-efficient. Theoretical analysis shows that SCOPE converges at a linear rate when the objective function is strongly convex. Furthermore, empirical results on real datasets show that SCOPE can outperform other state-of-the-art distributed learning methods on Spark, including both batch learning methods and DSO methods.
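For readers outside the area, a standard rendering of the composite problem class the abstract refers to (the notation below is an assumption for illustration, not quoted from the paper) is

\min_{w \in \mathbb{R}^d} P(w) = \frac{1}{n} \sum_{i=1}^{n} f_i(w) + R(w),

where f_i(w) is the loss on the i-th training example and R(w) is a regularizer. L2-regularized LR takes f_i(w) = \log(1 + \exp(-y_i w^\top x_i)) and R(w) = \frac{\lambda}{2} \lVert w \rVert^2; replacing the logistic loss with the hinge loss \max(0, 1 - y_i w^\top x_i) gives the SVM.

The algorithmic details of SCOPE are in the paper itself; purely to illustrate the flavor of DSO on Spark, the following minimal Scala sketch runs an SVRG-style variance-reduced update loop locally on each partition and averages the per-partition iterates on the driver, with one full-gradient aggregation per epoch. Everything here (object name, synthetic data, step size, iteration counts) is an illustrative assumption, not the paper's implementation.

    import scala.util.Random
    import org.apache.spark.sql.SparkSession

    // Hypothetical, self-contained sketch; all names are illustrative.
    object ScopeStyleSketch {

      // Gradient of the logistic loss on one example (label y in {-1, +1}).
      def grad(w: Array[Double], x: Array[Double], y: Double): Array[Double] = {
        var m = 0.0
        var i = 0
        while (i < w.length) { m += w(i) * x(i); i += 1 }
        val c = -y / (1.0 + math.exp(y * m))
        x.map(_ * c)
      }

      // In-place y += alpha * x.
      def axpy(alpha: Double, x: Array[Double], y: Array[Double]): Unit = {
        var i = 0
        while (i < y.length) { y(i) += alpha * x(i); i += 1 }
      }

      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("scope-style-sketch")
          .master("local[4]").getOrCreate()
        val sc = spark.sparkContext

        // Synthetic data stands in for a real distributed dataset.
        val d = 10
        val rnd = new Random(42)
        val data = sc.parallelize(Seq.fill(1000) {
          val x = Array.fill(d)(rnd.nextGaussian())
          (x, if (x.sum > 0) 1.0 else -1.0)
        }, 4).cache()
        val n = data.count().toDouble

        val eta = 0.1        // step size (illustrative, not tuned)
        val localSteps = 200 // stochastic updates per partition per epoch

        var w = Array.fill(d)(0.0)
        for (_ <- 1 to 10) {
          val wB = sc.broadcast(w)

          // One pass over the data: full gradient at the current iterate.
          val mu = data.map { case (x, y) => grad(wB.value, x, y) }
            .treeAggregate(Array.fill(d)(0.0))(
              (acc, g) => { axpy(1.0, g, acc); acc },
              (a, b) => { axpy(1.0, b, a); a })
            .map(_ / n)
          val muB = sc.broadcast(mu)

          // Each partition runs variance-reduced SGD locally from w; the
          // driver then averages the per-partition iterates.
          val (sum, cnt) = data.mapPartitions { it =>
            val part = it.toArray
            val u = wB.value.clone()
            val r = new Random()
            for (_ <- 1 to localSteps if part.nonEmpty) {
              val (x, y) = part(r.nextInt(part.length))
              val g = grad(u, x, y)               // stochastic gradient at u
              axpy(-1.0, grad(wB.value, x, y), g) // minus gradient at w
              axpy(1.0, muB.value, g)             // plus full gradient
              axpy(-eta, g, u)
            }
            Iterator((u, 1L))
          }.reduce { case ((a, ca), (b, cb)) =>
            val s = a.clone(); axpy(1.0, b, s); (s, ca + cb)
          }

          w = sum.map(_ / cnt)
        }

        println(s"learned w: ${w.mkString(", ")}")
        spark.stop()
      }
    }

Note that the sketch communicates only a few times per epoch (broadcasts of w and the full gradient, plus one aggregation of local iterates); this general pattern, rather than routing every stochastic step through the driver, is what makes such methods communication-efficient.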

Published

2017-02-13

How to Cite

Zhao, S.-Y., Xiang, R., Shi, Y.-H., Gao, P., & Li, W.-J. (2017). SCOPE: Scalable Composite Optimization for Learning on Spark. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10920