Do Subsampled Newton Methods Work for High-Dimensional Data?

Authors

  • Xiang Li, Peking University
  • Shusen Wang, Stevens Institute of Technology
  • Zhihua Zhang, Peking University

DOI:

https://doi.org/10.1609/aaai.v34i04.5905

Abstract

Subsampled Newton methods approximate Hessian matrices through subsampling techniques to alleviate the per-iteration cost. Previous results require Ω(d) samples to approximate the Hessian, where d is the dimension of the data points, making these methods less practical for high-dimensional data. The situation deteriorates further when d is comparable to the number of data points n, since the required sample size then approaches the whole dataset and subsampling loses its purpose. This paper theoretically justifies the effectiveness of subsampled Newton methods on strongly convex empirical risk minimization with high-dimensional data. Specifically, we provably require only Θ̃(d_eff^γ) samples for approximating the Hessian matrices, where d_eff^γ is the γ-ridge leverage and can be much smaller than d as long as nγ ≫ 1. Our theories work for three types of Newton methods: subsampled Newton, distributed Newton, and proximal Newton.
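
For readers who want a concrete picture of the template the abstract describes, below is a minimal NumPy sketch of a subsampled Newton iteration on ℓ2-regularized logistic regression. It is illustrative only: the choice of loss, the uniform row sampling, and all function names here are assumptions of this sketch rather than the paper's algorithm (whose Θ̃(d_eff^γ) sample bound rests on a more careful sampling analysis), and effective_dimension assumes the common definition d_eff^γ = tr(H(H + γI)^(-1)).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def full_gradient(w, X, y, gamma):
    # Gradient of (1/n) * sum_i log(1 + exp(-y_i x_i^T w)) + (gamma/2)||w||^2,
    # with labels y_i in {-1, +1}; uses all n data points.
    n = X.shape[0]
    p = sigmoid(y * (X @ w))               # probability of the correct label
    return -(X.T @ (y * (1.0 - p))) / n + gamma * w

def subsampled_hessian(w, X, y, gamma, s, rng):
    # The exact Hessian is (1/n) X^T D X + gamma*I with
    # D_ii = sigmoid(x_i^T w) * (1 - sigmoid(x_i^T w)).
    # Here it is estimated from s uniformly drawn rows; the paper's theory
    # concerns how small s can be (Theta~(d_eff^gamma) instead of Omega(d)).
    n, d = X.shape
    idx = rng.choice(n, size=s, replace=False)
    Xs = X[idx]
    z = Xs @ w
    D = sigmoid(z) * (1.0 - sigmoid(z))
    return (Xs.T * D) @ Xs / s + gamma * np.eye(d)

def subsampled_newton(X, y, gamma, s, iters=20, seed=0):
    # Full gradient, subsampled Hessian: the usual subsampled Newton template.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        g = full_gradient(w, X, y, gamma)
        H = subsampled_hessian(w, X, y, gamma, s, rng)
        w -= np.linalg.solve(H, g)         # exact solve; CG would also work
    return w

def effective_dimension(H, gamma):
    # d_eff^gamma = tr(H (H + gamma*I)^{-1}) = sum_i lam_i / (lam_i + gamma);
    # small when the spectrum of H decays quickly relative to gamma.
    lam = np.linalg.eigvalsh(H)
    return float(np.sum(lam / (lam + gamma)))
```

The point of the sketch is the shape of the iteration: the gradient uses all n points, while the expensive d×d curvature matrix is built from only s of them, and the abstract's claim is that s can scale with d_eff^γ rather than d.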

Published

2020-04-03

How to Cite

Li, X., Wang, S., & Zhang, Z. (2020). Do Subsampled Newton Methods Work for High-Dimensional Data?. Proceedings of the AAAI Conference on Artificial Intelligence, 34(04), 4723-4730. https://doi.org/10.1609/aaai.v34i04.5905

Section

AAAI Technical Track: Machine Learning