A Sharper Generalization Bound for Divide-and-Conquer Ridge Regression

Authors

  • Shusen Wang Stevens Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v33i01.33015305

Abstract

We study the distributed machine learning problem in which n feature-response pairs are partitioned among m machines uniformly at random. The goal is to approximately solve an empirical risk minimization (ERM) problem with the minimum amount of communication. The divide-and-conquer (DC) method, proposed several years ago, lets every worker machine independently solve the same ERM problem on its local feature-response pairs and lets the driver machine combine the local solutions. This approach is one-shot and thereby extremely communication-efficient. Although the DC method has been studied in many prior works, a reasonable generalization bound had not been established before this work.
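To make the one-shot protocol concrete, the following sketch shows one plausible instantiation for ridge regression: the samples are split uniformly at random among m workers, each worker solves its local ridge problem, and the driver simply averages the local solutions. The regularization parameter gamma, the plain-averaging combiner, and all names are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def local_ridge(X, y, gamma):
    """Solve a local ridge regression problem on one worker's partition:
    (X^T X / n_local + gamma I) w = X^T y / n_local."""
    n_local, d = X.shape
    A = X.T @ X / n_local + gamma * np.eye(d)
    b = X.T @ y / n_local
    return np.linalg.solve(A, b)

def dc_ridge(X, y, m, gamma, seed=None):
    """One-shot divide-and-conquer ridge regression (illustrative sketch):
    partition the n samples uniformly at random among m workers, let each
    worker solve ridge regression locally, and average the solutions."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    partitions = np.array_split(rng.permutation(n), m)
    local_solutions = [local_ridge(X[idx], y[idx], gamma) for idx in partitions]
    # The driver combines the m local solutions in a single communication round.
    return np.mean(local_solutions, axis=0)
```

In a real distributed deployment each call to local_ridge would run on a separate machine; only the m length-d local solutions are communicated, which is what makes the method one-shot.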

For the ridge regression problem, we show that the prediction error of the DC method on unseen test samples is at most (1 + ε) times the optimal. Although constant-factor bounds have been established in prior works, their sample complexities have a quadratic dependence on the dimension d, which does not match the setting of most real-world problems. In contrast, our bounds are much stronger. First, our (1 + ε) error bound is much better than the prior constant-factor bounds. Second, our sample complexity is merely linear in d.
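Schematically, and omitting constants, logarithmic factors, and the precise conditions stated in the paper, the contrast between the two kinds of guarantees can be written as follows, where R denotes the prediction error on unseen test samples, \hat{w}_{\mathrm{DC}} the DC solution, and w^\star the optimal solution (notation here is illustrative):

```latex
% Schematic restatement of the claims above; constants and log factors omitted.
\[
  R(\hat{w}_{\mathrm{DC}}) \;\le\; (1+\epsilon)\, R(w^\star)
  \qquad \text{with sample complexity linear in } d,
\]
\[
  \text{versus prior bounds of the form}\quad
  R(\hat{w}_{\mathrm{DC}}) \;\le\; C \cdot R(w^\star)
  \qquad \text{with sample complexity quadratic in } d.
\]
```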

Published

2019-07-17

How to Cite

Wang, S. (2019). A Sharper Generalization Bound for Divide-and-Conquer Ridge Regression. Proceedings of the AAAI Conference on Artificial Intelligence, 33(01), 5305-5312. https://doi.org/10.1609/aaai.v33i01.33015305

Issue

Section

AAAI Technical Track: Machine Learning