On the Relation between Distributionally Robust Optimization and Data Curation (Student Abstract)

Authors

  • Agnieszka Słowik University of Cambridge
  • Léon Bottou Facebook AI Research
  • Sean B. Holden University of Cambridge
  • Mateja Jamnik University of Cambridge

DOI:

https://doi.org/10.1609/aaai.v36i11.21663

Keywords:

Distributionally Robust Optimization, Machine Learning, Optimization, Adversarial Learning

Abstract

Machine learning systems based on minimizing average error have been shown to perform inconsistently across notable subsets of the data, a shortcoming that a low average error over the entire dataset does not expose. In consequential social and economic applications, where data represent people, this can lead to discrimination against underrepresented gender and ethnic groups. Distributionally Robust Optimization (DRO) seemingly addresses this problem by minimizing the worst expected risk across subpopulations. We establish theoretical results that clarify the relation between DRO and the optimization of the same loss averaged on an adequately weighted training dataset. A practical implication of our results is that neither DRO nor curating the training set should be construed as a complete solution for bias mitigation.
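The link between the worst-group objective and a reweighted average loss can be illustrated with a toy sketch. This is not the paper's construction, only a minimal numerical example under assumed data (group sizes, noise, and variable names are all illustrative): for fixed model parameters, the group-DRO value equals the average loss under adversarial weights that put all mass on the currently worst-off group.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: two subpopulations, one much smaller
# (an "underrepresented" group) with a different input scale.
X_a = rng.normal(0.0, 1.0, size=(200, 2))
X_b = rng.normal(0.0, 3.0, size=(20, 2))
w_true = np.array([1.0, -2.0])
y_a = X_a @ w_true + rng.normal(0.0, 0.1, 200)
y_b = X_b @ w_true + rng.normal(0.0, 0.1, 20)

def group_losses(w):
    """Mean squared error evaluated separately on each subpopulation."""
    return np.array([np.mean((X_a @ w - y_a) ** 2),
                     np.mean((X_b @ w - y_b) ** 2)])

w = np.zeros(2)               # some fixed (untrained) parameters
losses = group_losses(w)

# Group-DRO objective: worst expected risk across subpopulations.
dro_objective = losses.max()

# The same value is an *average* loss under weights concentrated
# on the worst-off group, i.e. a particular dataset reweighting.
weights = np.zeros_like(losses)
weights[losses.argmax()] = 1.0
reweighted_objective = weights @ losses

assert np.isclose(dro_objective, reweighted_objective)
```

The paper's point, as the abstract states, is precisely that this correspondence makes neither DRO nor upfront reweighting/curation a complete answer on its own: the adversarial weights above depend on the current parameters, so a single fixed curation of the training set cannot reproduce the DRO objective throughout training.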

Published

2022-06-28

How to Cite

Słowik, A., Bottou, L., Holden, S. B., & Jamnik, M. (2022). On the Relation between Distributionally Robust Optimization and Data Curation (Student Abstract). Proceedings of the AAAI Conference on Artificial Intelligence, 36(11), 13053-13054. https://doi.org/10.1609/aaai.v36i11.21663