TRF: Learning Kernels with Tuned Random Features

Authors

  • Alistair Shilton, Deakin University, Australia
  • Sunil Gupta, Deakin University, Australia
  • Santu Rana, Deakin University, Australia
  • Arun Kumar Venkatesh, Deakin University, Australia
  • Svetha Venkatesh, Deakin University, Australia

DOI:

https://doi.org/10.1609/aaai.v36i8.20803

Keywords:

Machine Learning (ML)

Abstract

Random Fourier features (RFF) are a popular set of tools for constructing low-dimensional approximations of translation-invariant kernels, allowing kernel methods to be scaled to big data. Apart from their computational advantages, by working in the spectral domain random Fourier features expose the translation-invariant kernel as a density function that may, in principle, be manipulated directly to tune the kernel. In this paper we propose selecting the density function from a reproducing kernel Hilbert space (RKHS) to allow us to search the space of all translation-invariant kernels. Our approach, which we call tuned random features (TRF), achieves this by approximating the density function as the RKHS-norm regularised least-squares best fit to an unknown ``true'' optimal density function, resulting in an RFF formulation where kernel selection is reduced to regularised risk minimisation with a novel regulariser. We derive bounds on the Rademacher complexity for our method, showing that our random features approximation converges to optimal kernel selection in the large N, D limit. Finally, we present experimental results for a variety of real-world learning problems, demonstrating the performance of our approach relative to comparable methods.
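As background to the abstract's starting point, the standard RFF construction draws frequencies from the spectral density of a translation-invariant kernel (by Bochner's theorem) so that an inner product of random features approximates the kernel. The sketch below is a minimal illustration for the RBF kernel only, not the paper's TRF method; the function name and bandwidth parameter are our own choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rff_features(X, D=2000, sigma=1.0, rng=rng):
    """Map X (n, d) to D random Fourier features approximating the
    RBF kernel k(x, y) = exp(-||x - y||^2 / (2 * sigma^2))."""
    n, d = X.shape
    # The spectral density of the RBF kernel is Gaussian: w ~ N(0, I / sigma^2)
    W = rng.normal(scale=1.0 / sigma, size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Compare the feature-map approximation against the exact RBF kernel
X = rng.normal(size=(5, 3))
Z = rff_features(X)
K_approx = Z @ Z.T
sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_exact = np.exp(-sq_dists / 2.0)
print(np.abs(K_approx - K_exact).max())  # Monte Carlo error, shrinks as D grows
```

TRF, as the abstract describes, goes further: rather than fixing the sampling density (Gaussian above), it treats the density itself as a tunable object in an RKHS, so kernel selection becomes regularised risk minimisation.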

Published

2022-06-28

How to Cite

Shilton, A., Gupta, S., Rana, S., Venkatesh, A. K., & Venkatesh, S. (2022). TRF: Learning Kernels with Tuned Random Features. Proceedings of the AAAI Conference on Artificial Intelligence, 36(8), 8286-8294. https://doi.org/10.1609/aaai.v36i8.20803

Section

AAAI Technical Track on Machine Learning III