Linear-Time Learning on Distributions with Approximate Kernel Embeddings

Authors

  • Danica J. Sutherland, Carnegie Mellon University
  • Junier Oliva, Carnegie Mellon University
  • Barnabás Póczos, Carnegie Mellon University
  • Jeff Schneider, Carnegie Mellon University

DOI:

https://doi.org/10.1609/aaai.v30i1.10308

Keywords:

nonparametric estimation, approximate kernel embedding, learning on distributions

Abstract

Many interesting machine learning problems are best posed by considering instances that are distributions, or sample sets drawn from distributions. Most previous work on machine learning tasks with distributional inputs has relied on pairwise kernel evaluations between pdfs (or sample sets). While such an approach is fine for smaller datasets, computing an N × N Gram matrix is prohibitive for large ones. Recent scalable estimators that work over pdfs have done so only with kernels based on Euclidean metrics, like the L2 distance. However, a myriad of other useful metrics are available, such as total variation, Hellinger distance, and the Jensen-Shannon divergence. This work develops the first random features for pdfs whose dot products approximate kernels using these non-Euclidean metrics. These random features allow estimators to scale to large datasets by working in a primal space, without computing large Gram matrices. We provide an analysis of the approximation error of our proposed random features, and show empirically the quality of the approximation both in estimating a Gram matrix and in solving learning tasks on real-world and synthetic data.
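
As a rough illustration of the idea (a minimal sketch, not the authors' exact construction), one can first embed each density into L2 — for the Hellinger distance, via a square-root map of a histogram estimate — and then apply standard random Fourier features, so that dot products of the resulting feature vectors approximate an RBF kernel on the Hellinger distance. The function name, bin grid, and bandwidth gamma below are illustrative assumptions.

import numpy as np

def approx_hellinger_rbf_features(sample_sets, bin_edges, n_features=256, gamma=1.0, seed=0):
    # Hedged sketch, not the paper's exact construction: random features whose
    # dot products approximate an RBF kernel on the Hellinger distance between
    # the densities underlying each sample set.
    rng = np.random.default_rng(seed)
    d = len(bin_edges) - 1
    # Random Fourier feature parameters (Rahimi & Recht 2007), shared across
    # all sample sets so that inner products between them are meaningful.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, n_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
    feats = []
    for x in sample_sets:
        # Histogram density estimate on a shared grid, converted to bin probabilities.
        hist, _ = np.histogram(x, bins=bin_edges, density=True)
        probs = hist * np.diff(bin_edges)
        # Square-root map: Euclidean distance between these vectors matches the
        # Hellinger distance between the histograms (up to a constant factor).
        embedded = np.sqrt(probs)
        # Random Fourier features of the embedded density; dot products then
        # approximate exp(-gamma * ||sqrt(p) - sqrt(q)||^2).
        feats.append(np.sqrt(2.0 / n_features) * np.cos(embedded @ W + b))
    return np.vstack(feats)

# Example (hypothetical sample sets xs1, xs2): Z = approx_hellinger_rbf_features([xs1, xs2], np.linspace(0, 1, 33))
# Z @ Z.T approximates the Gram matrix; a linear model trained on Z never forms it explicitly.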

Published

2016-03-02

How to Cite

Sutherland, D., Oliva, J., Póczos, B., & Schneider, J. (2016). Linear-Time Learning on Distributions with Approximate Kernel Embeddings. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10308

Issue

Vol. 30 No. 1 (2016)

Section

Technical Papers: Machine Learning Methods