The Role of Dimensionality Reduction in Classification

Authors

  • Weiran Wang TTI Chicago
  • Miguel Carreira-Perpinan University of California, Merced

DOI:

https://doi.org/10.1609/aaai.v28i1.8975

Keywords:

nonconvex optimization, classification, wrapper approaches

Abstract

Dimensionality reduction (DR) is often used as a preprocessing step in classification, but usually one first fixes the DR mapping, possibly using label information, and then learns a classifier (a filter approach). Best performance would be obtained by optimizing the classification error jointly over DR mapping and classifier (a wrapper approach), but this is a difficult nonconvex problem, particularly with nonlinear DR. Using the method of auxiliary coordinates, we give a simple, efficient algorithm to train a combination of nonlinear DR and a classifier, and apply it to an RBF mapping with a linear SVM. This alternates steps where we train the RBF mapping and a linear SVM as usual regression and classification, respectively, with a closed-form step that coordinates both. The resulting nonlinear low-dimensional classifier achieves classification errors competitive with the state-of-the-art but is fast at training and testing, and allows the user to trade off runtime for classification accuracy easily. We then study the role of nonlinear DR in linear classification, and the interplay between the DR mapping, the number of latent dimensions and the number of classes. When trained jointly, the DR mapping takes an extreme role in eliminating variation: it tends to collapse classes in latent space, erasing all manifold structure, and to lay out class centroids so they are linearly separable with maximum margin.

Published

2014-06-21

How to Cite

Wang, W., & Carreira-Perpinan, M. (2014). The Role of Dimensionality Reduction in Classification. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.8975

Section

Main Track: Novel Machine Learning Algorithms