Veto-Consensus Multiple Kernel Learning

Authors

  • Yuxun Zhou, University of California, Berkeley
  • Ninghang Hu, University of Amsterdam
  • Costas Spanos, University of California, Berkeley

DOI:

https://doi.org/10.1609/aaai.v30i1.10251

Keywords:

Multiple Kernel Learning, Consensus Learning, Global Optimization

Abstract

We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes are described by the union (veto) of their complements. The proposed configuration is a natural fit for domain description and learning with hidden subgroups. We first provide a generalization risk bound in terms of the Rademacher complexity of the classifier, and then formulate a large-margin multi-ν learning objective with a tunable training error bound. Because the corresponding optimization is non-convex and existing methods suffer severely from local minima, we develop a new algorithm, the Parametric Dual Descent Procedure (PDDP), which can approach the global optimum with guarantees. PDDP rests on two theorems that reveal the global convexity and local explicitness of the parameterized dual optimum, for which a series of new techniques for parametric programming has been developed. The proposed method is evaluated in an extensive set of experiments, and the results show significant improvement over state-of-the-art approaches.
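To make the combination rule concrete, below is a minimal prediction-time sketch (not the authors' implementation) of the veto-consensus idea: a point is assigned to the "consensus" class only if every base kernelized decision rule accepts it, and is vetoed into the other class as soon as any single rule rejects it. The base decision functions are ordinary kernel expansions with hypothetical weights `alpha`, offsets `b`, and RBF kernels; the multi-ν learning objective and PDDP training described in the paper are not shown.

```python
import numpy as np

def rbf_kernel(X, Z, gamma):
    """Gaussian RBF kernel matrix between rows of X and rows of Z."""
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def base_decision_value(x, support_vectors, alpha, b, gamma):
    """Decision value of one kernelized rule: f(x) = sum_i alpha_i k(x_i, x) + b."""
    k = rbf_kernel(x[None, :], support_vectors, gamma)  # shape (1, n_sv)
    return float(k @ alpha + b)

def vcmkl_predict(x, base_rules):
    """
    Veto-consensus combination of base rules (illustrative only).

    base_rules: list of dicts with keys 'sv', 'alpha', 'b', 'gamma',
    one per base kernel. The point is labeled +1 (consensus class)
    only if *all* base decision values are positive; a single
    non-positive value vetoes it into the -1 class.
    """
    values = [
        base_decision_value(x, r["sv"], r["alpha"], r["b"], r["gamma"])
        for r in base_rules
    ]
    return +1 if all(v > 0 for v in values) else -1

# Toy usage with two randomly initialized base rules (placeholder weights,
# not the result of the multi-nu / PDDP training in the paper).
rng = np.random.default_rng(0)
rules = [
    {"sv": rng.normal(size=(5, 2)), "alpha": rng.normal(size=5), "b": 0.1, "gamma": g}
    for g in (0.5, 2.0)
]
print(vcmkl_predict(np.zeros(2), rules))
```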

Published

2016-03-02

How to Cite

Zhou, Y., Hu, N., & Spanos, C. (2016). Veto-Consensus Multiple Kernel Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10251

Section

Technical Papers: Machine Learning Methods