Multi-View Randomized Kernel Classification via Nonconvex Optimization

Authors

  • Xiaojian Ding, Nanjing University of Finance and Economics
  • Fan Yang, Nanjing University of Finance and Economics

DOI:

https://doi.org/10.1609/aaai.v38i10.29064

Keywords:

ML: Classification and Regression, ML: Kernel Methods, ML: Multi-instance/Multi-view Learning, SO: Non-convex Optimization

Abstract

Multiple kernel learning (MKL) is a representative supervised multi-view learning method widely applied in multi-modal and multi-view applications. MKL aims to classify data by integrating complementary information from predefined kernels. Although existing MKL methods achieve promising performance, they fail to consider the tradeoff between the diversity and the classification accuracy of kernels, which prevents further improvement of classification performance. In this paper, we tackle this problem by generating a number of high-quality base learning kernels and selecting a kernel subset with maximum pairwise diversity and minimum generalization error. We first formulate this idea as a nonconvex quadratic integer programming problem. We then transform this nonconvex problem into a convex optimization problem and prove it is equivalent to a semidefinite relaxation problem, which a semidefinite-based branch-and-bound algorithm can quickly solve. Experimental results on real-world datasets demonstrate the superiority of the proposed method. The results also show that our method works with the support vector machine (SVM) classifier as well as other state-of-the-art kernel classifiers.
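The selection step described in the abstract can be illustrated with a toy quadratic integer program: maximize pairwise diversity among the chosen kernels while penalizing their individual errors, subject to a cardinality constraint. The sketch below is a hypothetical brute-force illustration of that objective for small problems; the function name, the objective form x^T D x − λ e^T x, and the trade-off weight `lam` are assumptions for exposition, not the paper's actual formulation, which the authors solve at scale via a semidefinite relaxation and a branch-and-bound algorithm.

```python
import itertools

import numpy as np


def select_kernel_subset(diversity, errors, k, lam=1.0):
    """Toy brute-force solver for a kernel-subset selection QIP.

    Maximizes x^T D x - lam * e^T x over binary x with sum(x) = k, where
    D[i, j] is the pairwise diversity of kernels i and j and e[i] is an
    estimated generalization error for kernel i. Exhaustive search is only
    feasible for small n; it stands in here for the paper's SDP-based
    branch-and-bound solver.
    """
    n = len(errors)
    best_val, best_subset = -np.inf, None
    for subset in itertools.combinations(range(n), k):
        x = np.zeros(n)
        x[list(subset)] = 1.0
        val = x @ diversity @ x - lam * (errors @ x)
        if val > best_val:
            best_val, best_subset = val, subset
    return best_subset, best_val


# Example: kernels 0 and 1 are highly diverse with respect to each other,
# so with equal errors they form the best pair.
D = np.array([[0, 5, 1, 1],
              [5, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]], dtype=float)
e = np.zeros(4)
subset, val = select_kernel_subset(D, e, k=2)
```

With the diversity matrix above, the selected pair is (0, 1), since the symmetric off-diagonal entries D[0,1] = D[1,0] = 5 dominate every other pairing; increasing `e[1]` shifts the optimum away from kernel 1, showing the diversity/error tradeoff the paper targets.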

Published

2024-03-24

How to Cite

Ding, X., & Yang, F. (2024). Multi-View Randomized Kernel Classification via Nonconvex Optimization. Proceedings of the AAAI Conference on Artificial Intelligence, 38(10), 11793–11801. https://doi.org/10.1609/aaai.v38i10.29064

Section

AAAI Technical Track on Machine Learning I