Smooth Optimization for Effective Multiple Kernel Learning

Authors

  • Zenglin Xu, Saarland University and MPI Informatics
  • Rong Jin, Michigan State University
  • Shenghuo Zhu, NEC Laboratories America
  • Michael Lyu, The Chinese University of Hong Kong
  • Irwin King, The Chinese University of Hong Kong

DOI:

https://doi.org/10.1609/aaai.v24i1.7675

Keywords:

Multiple Kernel Learning, Smooth Optimization, Classification

Abstract

Multiple Kernel Learning (MKL) can be formulated as a convex-concave min-max optimization problem whose saddle point corresponds to the optimal solution of MKL. Most MKL methods impose an L1-norm simplex constraint on the kernel combination weights, which makes the objective a non-smooth function of those weights. These methods usually alternate between two steps: one optimizes the kernel combination weights, and the other updates the SVM parameters. Despite their computational efficiency, they tend to discard informative complementary kernels. To improve accuracy, we introduce smoothness into the optimization procedure. Furthermore, we transform the problem into a single smooth convex optimization problem and employ Nesterov's method to solve it efficiently. Experiments on benchmark data sets demonstrate that the proposed algorithm clearly improves on current MKL methods in a number of scenarios.
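For readers who want the shape of the problem the abstract alludes to: the L1-constrained MKL saddle-point formulation standard in the literature (the paper's exact notation may differ) is

\[
\min_{p \in \Delta_m} \; \max_{\alpha \in \mathcal{Q}} \; \mathbf{1}^\top \alpha \;-\; \frac{1}{2}\,(\alpha \circ y)^\top \Big( \sum_{t=1}^{m} p_t K_t \Big) (\alpha \circ y),
\qquad
\Delta_m = \Big\{ p \in \mathbb{R}^m : p_t \ge 0,\ \textstyle\sum_{t=1}^{m} p_t = 1 \Big\},
\]

where K_1, ..., K_m are the base kernel matrices, y the labels, and \mathcal{Q} the SVM dual feasible set. The sketch below illustrates the two ingredients named in the abstract, smoothing a non-smooth function of the kernel weights and minimizing it with Nesterov's accelerated gradient method, on a toy objective: a softmax smoothing of max_t (Cp)_t over the simplex. The objective, the matrix C, and all parameter choices are illustrative assumptions, not the paper's actual smoothed MKL formulation.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    {p : p >= 0, sum(p) = 1} (sorting-based algorithm)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    idx = np.arange(1, v.size + 1)
    rho = idx[u - css / idx > 0][-1]
    return np.maximum(v - css[rho - 1] / rho, 0.0)

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def nesterov(grad, x0, L, n_iters=300, project=lambda x: x):
    """Nesterov's accelerated (projected) gradient method for a convex
    objective with an L-Lipschitz gradient; O(1/k^2) convergence rate."""
    x = y = x0.copy()
    t = 1.0
    for _ in range(n_iters):
        x_new = project(y - grad(y) / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)
        x, t = x_new, t_new
    return x

# Toy setup (illustrative): f(p) = max_t (C @ p)_t is non-smooth in the
# weights p; its softmax smoothing
#   f_mu(p) = mu * log(sum_t exp((C @ p)_t / mu))
# is smooth with gradient C.T @ softmax(C @ p / mu), gradient Lipschitz
# constant ||C||_2^2 / mu, and approximates f within mu * log(n).
rng = np.random.default_rng(0)
n, m = 50, 5                        # n "loss terms", m kernel weights
C = rng.normal(size=(n, m))
mu = 0.01                           # smoothing parameter
L = np.linalg.norm(C, 2) ** 2 / mu  # gradient Lipschitz constant

grad_f = lambda p: C.T @ softmax(C @ p / mu)
p0 = np.ones(m) / m                 # start from uniform kernel weights
p_opt = nesterov(grad_f, p0, L, project=project_simplex)
print("weights on the simplex:", np.round(p_opt, 3))
```

Smaller values of the smoothing parameter mu give a tighter approximation to the non-smooth max but a larger Lipschitz constant, hence smaller steps; this accuracy/speed trade-off is the standard consideration when applying Nesterov-style smoothing.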

Published

2010-07-03

How to Cite

Xu, Z., Jin, R., Zhu, S., Lyu, M., & King, I. (2010). Smooth Optimization for Effective Multiple Kernel Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 637-642. https://doi.org/10.1609/aaai.v24i1.7675