Corruption-Tolerant Algorithms for Generalized Linear Models

Authors

  • Bhaskar Mukhoty, Mohamed Bin Zayed University of Artificial Intelligence, Abu Dhabi, UAE
  • Debojyoti Dey, Indian Institute of Technology Kanpur, Uttar Pradesh, India
  • Purushottam Kar, Indian Institute of Technology Kanpur, Uttar Pradesh, India

DOI:

https://doi.org/10.1609/aaai.v37i8.26108

Keywords:

ML: Learning Theory, ML: Adversarial Learning & Robustness, ML: Classification and Regression, ML: Optimization

Abstract

This paper presents SVAM (Sequential Variance-Altered MLE), a unified framework for learning generalized linear models under adversarial label corruption in training data. SVAM extends to tasks such as least squares regression, logistic regression, and gamma regression, whereas many existing works on learning with label corruptions focus only on least squares regression. SVAM is based on a novel variance reduction technique that may be of independent interest and works by iteratively solving weighted MLEs over variance-altered versions of the GLM objective. SVAM offers provable model recovery guarantees superior to the state-of-the-art for robust regression even when a constant fraction of training labels are adversarially corrupted. SVAM also empirically outperforms several existing problem-specific techniques for robust regression and classification. Code for SVAM is available at https://github.com/purushottamkar/svam/
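To make the core idea concrete, here is a minimal sketch of what "iteratively solving weighted MLEs over variance-altered versions of the GLM objective" can look like in the least-squares special case. This is not the authors' implementation (see the linked repository for that); the function name `svam_ls_sketch`, the inverse-variance parameter `beta`, and the scaling factor `xi` are illustrative assumptions. Each round computes per-point weights from a variance-altered Gaussian likelihood of the current residuals, solves the resulting weighted least-squares MLE, and then sharpens the variance so points with large residuals (likely corrupted) are down-weighted ever more aggressively:

```python
import numpy as np

def svam_ls_sketch(X, y, n_iters=20, beta=0.1, xi=1.5):
    """Illustrative SVAM-style loop for robust least squares (not the paper's code).

    beta plays the role of an inverse variance: the weight of a point with
    residual r is exp(-beta * r^2 / 2), i.e. its likelihood under a
    variance-altered Gaussian noise model. Scaling beta up by xi each round
    keeps clean points at weight ~1 while corrupted points fade to ~0.
    """
    n, d = X.shape
    w = np.linalg.lstsq(X, y, rcond=None)[0]   # ordinary LS warm start
    for _ in range(n_iters):
        r = y - X @ w                          # residuals under current model
        s = np.exp(-0.5 * beta * r ** 2)       # variance-altered likelihood weights
        # weighted least-squares MLE: solve (X^T S X) w = X^T S y
        XtS = X.T * s
        w = np.linalg.solve(XtS @ X + 1e-8 * np.eye(d), XtS @ y)
        beta *= xi                             # sharpen the variance each round
    return w
```

On synthetic data where a constant fraction of labels is grossly corrupted, a loop of this shape typically recovers the planted model far more accurately than a single unweighted least-squares fit, which is the behavior the abstract's recovery guarantees formalize for general GLMs.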

Published

2023-06-26

How to Cite

Mukhoty, B., Dey, D., & Kar, P. (2023). Corruption-Tolerant Algorithms for Generalized Linear Models. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9243-9250. https://doi.org/10.1609/aaai.v37i8.26108

Section

AAAI Technical Track on Machine Learning III