Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression

Authors

  • Lijun Zhang, Zhejiang University
  • Rong Jin, Michigan State University
  • Chun Chen, Zhejiang University
  • Jiajun Bu, Zhejiang University
  • Xiaofei He, Zhejiang University

DOI:

https://doi.org/10.1609/aaai.v26i1.8300

Keywords:

Sparse kernel logistic regression

Abstract

In this paper, we study the problem of large-scale Kernel Logistic Regression (KLR). A straightforward approach is to apply stochastic approximation to KLR. We refer to this approach as the non-conservative online learning algorithm because it updates the kernel classifier after every received training example, leading to a dense classifier. To improve the sparsity of the KLR classifier, we propose two conservative online learning algorithms that update the classifier in a stochastic manner and generate sparse solutions. With appropriately designed updating strategies, our analysis shows that the two conservative algorithms enjoy a theoretical guarantee similar to that of the non-conservative algorithm. Empirical studies on several benchmark data sets demonstrate that, compared to batch-mode algorithms for KLR, the proposed conservative online learning algorithms produce sparse KLR classifiers and achieve similar classification accuracy with significantly shorter training time. Furthermore, both the sparsity and the classification accuracy of our methods are comparable to those of online kernel SVM.
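The abstract only outlines the two update styles it contrasts. As a rough illustration (not the paper's exact algorithm), the minimal sketch below implements online KLR over a kernel expansion f(x) = Σ_t α_t K(x_t, x): the non-conservative variant adds every example as a support vector with weight proportional to the logistic-loss derivative, while the conservative variant adds an example only with probability equal to that derivative, so the expected update matches the dense one while the solution stays sparse. The names (`online_klr`, `rbf_kernel`), the step size `eta`, and the Bernoulli-gating rule are assumptions for illustration; the paper's two conservative strategies may differ in detail.

```python
import numpy as np


def rbf_kernel(x, Z, gamma=1.0):
    # Gaussian RBF kernel between a point x and each row of Z (assumed kernel choice).
    return np.exp(-gamma * np.sum((Z - x) ** 2, axis=1))


def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))


def online_klr(X, y, eta=0.5, gamma=1.0, conservative=True, seed=0):
    """Hypothetical online KLR sketch; labels y are assumed to be in {-1, +1}.

    conservative=False: add every example with weight eta * y_t * sigmoid(-y_t f(x_t)),
    i.e., a stochastic-gradient step on the logistic loss -> dense classifier.
    conservative=True: add an example with probability sigmoid(-y_t f(x_t)) and
    fixed weight eta * y_t, so the update is unbiased but the classifier stays sparse.
    """
    rng = np.random.default_rng(seed)
    sv, alpha = [], []  # retained support vectors and their coefficients
    for x_t, y_t in zip(X, y):
        # Current prediction f(x_t) from the kernel expansion over stored examples.
        f_t = np.dot(alpha, rbf_kernel(x_t, np.asarray(sv), gamma)) if sv else 0.0
        g = sigmoid(-y_t * f_t)  # magnitude of the logistic-loss derivative, in (0, 1)
        if conservative:
            if rng.random() < g:  # stochastic (Bernoulli-gated) update
                sv.append(x_t)
                alpha.append(eta * y_t)
        else:
            sv.append(x_t)  # deterministic update after every example
            alpha.append(eta * y_t * g)
    return np.asarray(sv), np.asarray(alpha)


if __name__ == "__main__":
    # Toy usage: a linearly separable stream; the conservative variant keeps
    # only a fraction of the examples as support vectors.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(500, 2))
    y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
    sv, alpha = online_klr(X, y, conservative=True)
    print(f"support vectors kept: {len(sv)} / {len(X)}")
```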

Published

2021-09-20

How to Cite

Zhang, L., Jin, R., Chen, C., Bu, J., & He, X. (2021). Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 26(1), 1219-1225. https://doi.org/10.1609/aaai.v26i1.8300

Issue

Vol. 26 No. 1

Section

AAAI Technical Track: Machine Learning