A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing

Authors

  • Quan Zhou, Tsinghua University
  • Wenlin Chen, Washington University in St. Louis
  • Shiji Song, Tsinghua University
  • Jacob Gardner, Washington University in St. Louis
  • Kilian Weinberger, Washington University in St. Louis
  • Yixin Chen, Washington University in St. Louis

DOI:

https://doi.org/10.1609/aaai.v29i1.9625

Keywords:

Elastic Net, SVM, Reduction, Sparsity, Parallel computation

Abstract

Algorithmic reductions are one of the cornerstones of theoretical computer science. Surprisingly, to date they have played only a limited role in machine learning. In this paper we introduce a formal and practical reduction between two of the most widely used machine learning algorithms: from the Elastic Net (and the Lasso as a special case) to the Support Vector Machine. First, we derive the reduction and summarize it in only 11 lines of MATLAB. Then, we demonstrate its high-impact potential by translating recent advances in parallelizing SVM solvers directly to the Elastic Net. The resulting algorithm is a parallel solver for the Elastic Net (and Lasso) that naturally utilizes GPUs and multi-core CPUs. We evaluate it on twelve real-world data sets and show that it yields results identical to those of the popular (and highly optimized) glmnet implementation while being up to two orders of magnitude faster.
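To make the problem being reduced concrete: the Elastic Net minimizes a least-squares loss plus a combined l1/l2 penalty, and setting the l2 weight to zero recovers the Lasso mentioned in the abstract. The sketch below is not the paper's SVM reduction or its MATLAB code; it is a minimal, illustrative cyclic coordinate-descent solver (the approach glmnet itself is built on) for the objective 0.5*||y - Xw||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2, with all names and the synthetic data being our own assumptions.

```python
import numpy as np

def soft_threshold(rho, lam):
    """Soft-thresholding operator: the proximal map of the l1 penalty."""
    return np.sign(rho) * max(abs(rho) - lam, 0.0)

def elastic_net_cd(X, y, lam1, lam2, n_sweeps=200):
    """Minimize 0.5*||y - Xw||^2 + lam1*||w||_1 + 0.5*lam2*||w||^2
    by cyclic coordinate descent. lam2 = 0 recovers the Lasso."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)          # per-feature squared norms
    for _ in range(n_sweeps):
        for j in range(d):
            # residual with feature j's current contribution removed
            r_j = y - X @ w + X[:, j] * w[j]
            rho = X[:, j] @ r_j
            w[j] = soft_threshold(rho, lam1) / (col_sq[j] + lam2)
    return w

# Illustration on synthetic data with a sparse ground truth:
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
w_true = np.zeros(10)
w_true[:3] = [2.0, -1.5, 1.0]
y = X @ w_true + 0.01 * rng.standard_normal(50)
w_hat = elastic_net_cd(X, y, lam1=1.0, lam2=0.1)
```

The l1 term zeroes out coefficients of irrelevant features, while the l2 term stabilizes the solution when features are correlated; this is the sparsity/grouping trade-off that the keyword "Sparsity" above refers to.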

Published

2015-02-21

How to Cite

Zhou, Q., Chen, W., Song, S., Gardner, J., Weinberger, K., & Chen, Y. (2015). A Reduction of the Elastic Net to Support Vector Machines with an Application to GPU Computing. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9625

Section

Main Track: Novel Machine Learning Algorithms