GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning

Authors

  • Krishnateja Killamsetty (University of Texas at Dallas)
  • Durga Sivasubramanian (Indian Institute of Technology, Bombay)
  • Ganesh Ramakrishnan (Indian Institute of Technology, Bombay)
  • Rishabh Iyer (University of Texas at Dallas; Indian Institute of Technology, Bombay)

DOI:

https://doi.org/10.1609/aaai.v35i9.16988

Keywords:

Optimization

Abstract

Large-scale machine learning and deep models are extremely data-hungry. Unfortunately, obtaining large amounts of labeled data is expensive, and training state-of-the-art models (with hyperparameter tuning) requires significant computing resources and time. Moreover, real-world data is noisy and imbalanced. As a result, several recent papers try to make the training process more efficient and robust. However, most existing work focuses on either robustness or efficiency, but not both. In this work, we introduce GLISTER, a GeneraLIzation based data Subset selecTion for Efficient and Robust learning framework. We formulate GLISTER as a mixed discrete-continuous bi-level optimization problem that selects a subset of the training data which maximizes the log-likelihood on a held-out validation set. We then analyze GLISTER for simple models such as Gaussian and multinomial naive Bayes, the k-nearest-neighbor classifier, and linear regression, and show connections to submodularity. Next, we propose an iterative online algorithm, GLISTER-ONLINE, which performs data selection jointly with the parameter updates and can be applied to any loss-based learning algorithm. We then show that, for a rich class of loss functions including cross-entropy, hinge, squared, and logistic losses, the inner discrete data selection is an instance of (weakly) submodular optimization, and we analyze conditions under which GLISTER-ONLINE reduces the validation loss and converges. Finally, we propose GLISTER-ACTIVE, an extension to batch active learning, and we empirically demonstrate the performance of GLISTER on a wide range of tasks, including (a) data selection to reduce training time, (b) robust learning under label noise and class imbalance, and (c) batch active learning with a number of deep and shallow models. We show that our framework improves on the state of the art in both efficiency and accuracy in cases (a) and (c), and is more efficient than other state-of-the-art robust learning algorithms in case (b). The code for GLISTER is at: https://github.com/dssresearch/GLISTER.
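
The bi-level formulation described in the abstract can be written compactly. The notation below (ground set U, budget k, training and validation log-likelihoods LL_T and LL_V) is reconstructed from the abstract's description rather than copied from the paper:

  \max_{S \subseteq U,\, |S| \le k} \; LL_V\Big( \operatorname*{argmax}_{\theta} \, LL_T(\theta, S) \Big)

The inner (continuous) problem trains the model parameters on a candidate subset S; the outer (discrete) problem scores S by how well those trained parameters generalize to the held-out validation set.

The sketch below illustrates what one GLISTER-ONLINE selection round might look like for a simple logistic-regression model: per-sample training gradients are scored by their alignment with the mean validation gradient (a one-step Taylor approximation of the validation-loss reduction), and the budget is filled greedily. All names here (glister_select, per_sample_grads, eta) are illustrative assumptions, not the authors' reference implementation, which lives in the repository linked above.

  import numpy as np

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  def per_sample_grads(w, X, y):
      # Gradient of the logistic loss at w for each sample: shape (n, d).
      return (sigmoid(X @ w) - y)[:, None] * X

  def glister_select(w, X_trn, y_trn, X_val, y_val, k, eta=0.1):
      # Greedily pick k training points. Taking one SGD step on point e
      # changes the validation loss by roughly
      # -eta * grad_e(w)^T grad_val(w), so each point's marginal gain is
      # (approximately) the inner product of its gradient with the mean
      # validation gradient -- the (weakly) submodular objective the
      # abstract refers to.
      selected, w_hat = [], w.copy()
      remaining = set(range(len(X_trn)))
      for _ in range(k):
          g_trn = per_sample_grads(w_hat, X_trn, y_trn)
          g_val = per_sample_grads(w_hat, X_val, y_val).mean(axis=0)
          scores = g_trn @ g_val
          best = max(remaining, key=lambda i: scores[i])
          selected.append(best)
          remaining.remove(best)
          w_hat = w_hat - eta * g_trn[best]  # simulate one step on the new point
      return selected

In a full training loop, this selection step would alternate with ordinary gradient updates on the currently selected subset, which is what makes the online variant far cheaper than re-solving the bi-level problem from scratch at every epoch.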

Published

2021-05-18

How to Cite

Killamsetty, K., Sivasubramanian, D., Ramakrishnan, G., & Iyer, R. (2021). GLISTER: Generalization based Data Subset Selection for Efficient and Robust Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 35(9), 8110-8118. https://doi.org/10.1609/aaai.v35i9.16988

Section

AAAI Technical Track on Machine Learning II