Sample-adaptive Multiple Kernel Learning

Authors

  • Xinwang Liu National University of Defense Technology, Changsha
  • Lei Wang University of Wollongong
  • Jian Zhang University of Technology Sydney
  • Jianping Yin National University of Defense Technology, Changsha

DOI:

https://doi.org/10.1609/aaai.v28i1.8983

Keywords:

Kernel Methods, Max-Margin, Latent Variable Learning

Abstract

Existing multiple kernel learning (MKL) algorithms indiscriminately apply the same set of kernel combination weights to all samples. However, the utility of base kernels could vary across samples: a base kernel useful for one sample could become noisy for another. In this case, rigidly applying the same set of kernel combination weights could adversely affect the learning performance. To improve this situation, we propose a sample-adaptive MKL algorithm, in which base kernels are allowed to be adaptively switched on or off with respect to each sample. We achieve this goal by assigning a latent binary variable to each base kernel when it is applied to a sample. The kernel combination weights and the latent variables are jointly optimized via the margin maximization principle. As demonstrated on five benchmark data sets, the proposed algorithm consistently outperforms comparable algorithms in the literature.
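As a rough illustration of the idea described in the abstract, the sketch below (plain NumPy; the function name, variable names, and the exact combination form are hypothetical, reconstructed from the abstract rather than taken from the paper) builds a sample-adaptive combined kernel: each sample i carries a binary switch h[i, m] for base kernel m, and the kernel entry for a pair of samples sums only over base kernels switched on for both samples, weighted by the shared combination weights mu. It deliberately omits the joint max-margin optimization of mu and h, which is the core of the proposed algorithm.

    import numpy as np

    def sample_adaptive_kernel(base_kernels, mu, h):
        """Combine M precomputed base kernels with per-sample on/off switches.

        base_kernels : array of shape (M, n, n), base kernel matrices
        mu           : array of shape (M,), non-negative combination weights
        h            : binary array of shape (n, M); h[i, m] = 1 switches
                       base kernel m on for sample i, 0 switches it off

        Returns the (n, n) combined kernel whose (i, j) entry is
        sum_m mu[m] * h[i, m] * h[j, m] * base_kernels[m, i, j].
        (This combination form is an assumption based on the abstract.)
        """
        M, n, _ = base_kernels.shape
        K = np.zeros((n, n))
        for m in range(M):
            # The outer product h[:, m] h[:, m]^T zeroes out pairs where
            # either sample has switched this base kernel off.
            mask = np.outer(h[:, m], h[:, m])
            K += mu[m] * mask * base_kernels[m]
        return K

    # Toy usage: two RBF base kernels on random data, random switches.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((5, 3))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    base = np.stack([np.exp(-sq / (2 * s ** 2)) for s in (0.5, 2.0)])
    mu = np.array([0.6, 0.4])
    h = rng.integers(0, 2, size=(5, 2))
    print(sample_adaptive_kernel(base, mu, h))

Note that with h fixed to all ones, the masked sum reduces to the standard MKL combination sum_m mu[m] * base_kernels[m], which is the "same weights for all samples" setting the paper argues against.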

Published

2014-06-21

How to Cite

Liu, X., Wang, L., Zhang, J., & Yin, J. (2014). Sample-adaptive Multiple Kernel Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8983

Issue

Vol. 28 No. 1 (2014)

Section

Main Track: Novel Machine Learning Algorithms