Deep Semi-Random Features for Nonlinear Function Approximation

Authors

  • Kenji Kawaguchi, Massachusetts Institute of Technology
  • Bo Xie, Georgia Institute of Technology
  • Le Song, Georgia Institute of Technology

DOI:

https://doi.org/10.1609/aaai.v32i1.11750

Keywords:

Kernel Methods, Neural Networks, Random Features

Abstract

We propose semi-random features for nonlinear function approximation. The flexibility of semi-random features lies between that of the fully adjustable units in deep learning and the random features used in kernel methods. For one-hidden-layer models with semi-random features, we prove, with no unrealistic assumptions, that the model classes contain an arbitrarily good function as the width increases (universality), and that despite non-convexity, we can find such a good function (optimization theory) that generalizes to unseen new data (generalization bound). For deep models, again with no unrealistic assumptions, we prove universal approximation ability, a lower bound on the approximation error, a partial optimization guarantee, and a generalization bound. Depending on the problem, the generalization bound of deep semi-random features can be exponentially better than the known bounds for deep ReLU nets; our generalization error bound can be independent of the depth, the number of trainable weights, and the input dimensionality. In experiments, we show that semi-random features can match the performance of neural networks while using slightly more units, and outperform random features while using significantly fewer units. Moreover, we introduce a new implicit ensemble method based on semi-random features.
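To make the "in-between" flexibility concrete, below is a minimal Python sketch of one possible reading of a one-hidden-layer semi-random unit. It assumes the unit form σ(rᵀx)(wᵀx), where each gating direction r is drawn at random and then frozen while the linear weight w is trainable; the exact parameterization and activation are defined in the paper, so the names and form here are illustrative assumptions rather than the authors' implementation. If r were also trained, the unit would become a fully adjustable deep-learning unit; if w were also frozen, it would reduce to a plain random feature.

```python
import numpy as np

# Hypothetical sketch of a one-hidden-layer semi-random feature map.
# Assumed unit: phi_k(x) = step(r_k . x) * (w_k . x), with r_k random and
# frozen (the random "gate") and w_k the trainable linear part.

rng = np.random.default_rng(0)
d, n_units = 10, 256                          # input dimension, hidden width
R = rng.standard_normal((n_units, d))         # random, frozen gating directions
W = rng.standard_normal((n_units, d)) * 0.01  # trainable linear weights

def semi_random_features(X):
    """Map inputs X of shape (n, d) to hidden features of shape (n, n_units)."""
    gates = (X @ R.T > 0).astype(X.dtype)  # random binary gate per unit
    return gates * (X @ W.T)               # gate times trainable linear response

# Usage: the features feed a linear output layer, so only W and the output
# weights are learned; the gating directions R stay fixed throughout training.
X = rng.standard_normal((5, d))
H = semi_random_features(X)
print(H.shape)  # (5, 256)
```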

Published

2018-04-29

How to Cite

Kawaguchi, K., Xie, B., & Song, L. (2018). Deep Semi-Random Features for Nonlinear Function Approximation. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11750