Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data

Authors

  • Mingkui Tan, University of Adelaide
  • Yan Yan, University of Technology Sydney
  • Li Wang, University of Illinois at Chicago
  • Anton van den Hengel, University of Adelaide
  • Ivor W. Tsang, University of Technology Sydney
  • Qinfeng (Javen) Shi, University of Adelaide

DOI:

https://doi.org/10.1609/aaai.v30i1.10281

Keywords:

online learning, confidence-weighted learning, high-dimensional data, block-diagonal covariance

Abstract

Confidence-weighted (CW) learning is a successful online learning paradigm that maintains a Gaussian distribution over classifier weights and adopts a covariance matrix to represent the uncertainty of the weight vectors. However, existing full CW learning paradigms have two deficiencies: sensitivity to irrelevant features, and poor scalability to high-dimensional data due to the cost of maintaining the covariance structure. In this paper, we begin by presenting an online-batch CW learning scheme, and then present a novel paradigm to learn sparse CW classifiers. The proposed paradigm essentially identifies feature groups and naturally builds a block-diagonal covariance structure, making it well suited to CW learning over very high-dimensional data. Extensive experimental results demonstrate the superior performance of the proposed methods over state-of-the-art counterparts on classification and feature selection tasks.
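To illustrate the CW family the paper builds on, here is a minimal sketch of a diagonal-covariance, AROW-style confidence-weighted update (a simplified stand-in for the full CW variants studied in the paper, not the authors' sparse block-diagonal method). The function name, the regularizer `r`, and the diagonal restriction are illustrative assumptions.

```python
import numpy as np

def cw_diagonal(X, y, r=1.0):
    """Sketch of a diagonal confidence-weighted (AROW-style) learner.

    X: (n, d) feature matrix; y: labels in {-1, +1}; r: regularization.
    Maintains a Gaussian over weights: mean mu and diagonal covariance sigma.
    """
    n, d = X.shape
    mu = np.zeros(d)      # mean of the weight distribution
    sigma = np.ones(d)    # diagonal of the covariance (per-feature uncertainty)
    for x, label in zip(X, y):
        margin = label * (mu @ x)
        if margin < 1.0:                      # update only on hinge-loss violation
            v = (sigma * x) @ x               # confidence term x^T Sigma x
            beta = 1.0 / (v + r)
            alpha = max(0.0, 1.0 - margin) * beta
            mu += alpha * label * sigma * x   # move mean toward correct label
            sigma -= beta * (sigma * x) ** 2  # shrink variance on observed features
    return mu, sigma
```

The diagonal restriction is what makes this sketch scale to high dimensions; the paper's contribution is to recover richer (block-diagonal) covariance structure over identified feature groups without paying the full d-by-d cost.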

Published

2016-03-02

How to Cite

Tan, M., Yan, Y., Wang, L., Hengel, A. V. D., Tsang, I. W., & Shi, Q. (Javen). (2016). Learning Sparse Confidence-Weighted Classifier on Very High Dimensional Data. Proceedings of the AAAI Conference on Artificial Intelligence, 30(1). https://doi.org/10.1609/aaai.v30i1.10281

Section

Technical Papers: Machine Learning Methods