G-Optimal Design with Laplacian Regularization

Authors

  • Chun Chen, Zhejiang University
  • Zhengguang Chen, Zhejiang University
  • Jiajun Bu, Zhejiang University
  • Can Wang, Zhejiang University
  • Lijun Zhang, Zhejiang University
  • Cheng Zhang, China Disabled Persons' Federation Information Center

DOI:

https://doi.org/10.1609/aaai.v24i1.7672

Keywords:

Active Learning, Classification, Kernel Methods

Abstract

In many real-world applications, labeled data are usually expensive to obtain, while unlabeled data may be abundant. To reduce the labeling cost, active learning attempts to discover the most informative data points for labeling. Recently, Optimal Experimental Design (OED) techniques have attracted an increasing amount of attention. OED is concerned with designing experiments that minimize the variance of a parameterized model; typical design criteria include D-, A-, and E-optimality. However, all of these criteria are based on an ordinary linear regression model that aims to minimize the empirical error, while the geometrical structure of the data space is not well respected. In this paper, we propose a novel optimal experimental design approach for active learning, called Laplacian G-Optimal Design (LapGOD), which considers both the discriminating and the geometrical structures. By using Laplacian Regularized Least Squares, which incorporates manifold regularization into linear regression, our proposed algorithm selects those data points that minimize the maximum variance of the predicted values on the data manifold. We also extend our algorithm to the nonlinear case by using the kernel trick. Experimental results on various image databases show that our proposed LapGOD active learning algorithm can significantly enhance classification accuracy when the selected data points are used as training data.
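As a rough illustration of the criterion described in the abstract (a sketch under assumed notation, not taken from the paper itself), consider the standard Laplacian Regularized Least Squares estimator with a data matrix X over all candidate points, graph Laplacian L, a selected subset Z with columns X_Z, and hypothetical regularization weights \lambda_1 and \lambda_2. Under homoskedastic label noise, the covariance of the estimator is governed by the regularized matrix H_Z below, and a G-optimal design would pick Z to minimize the worst-case predictive variance over the candidate pool:

\[
H_Z = X_Z X_Z^{\top} + \lambda_1 X L X^{\top} + \lambda_2 I,
\qquad
\operatorname{Var}\bigl(\hat{y}(x)\bigr) \propto x^{\top} H_Z^{-1} X_Z X_Z^{\top} H_Z^{-1} x,
\]
\[
Z^{*} = \arg\min_{|Z| = k} \; \max_{x \in \mathcal{X}} \; x^{\top} H_Z^{-1} X_Z X_Z^{\top} H_Z^{-1} x .
\]

The kernel extension mentioned in the abstract would replace the inner products with a kernel function, so the same worst-case-variance criterion can be evaluated in a reproducing kernel Hilbert space.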

Published

2010-07-03

How to Cite

Chen, C., Chen, Z., Bu, J., Wang, C., Zhang, L., & Zhang, C. (2010). G-Optimal Design with Laplacian Regularization. Proceedings of the AAAI Conference on Artificial Intelligence, 24(1), 413-418. https://doi.org/10.1609/aaai.v24i1.7672