Losses over Labels: Weakly Supervised Learning via Direct Loss Construction

Authors

  • Dylan Sam, Carnegie Mellon University
  • J. Zico Kolter, Carnegie Mellon University; Bosch Center for Artificial Intelligence

DOI:

https://doi.org/10.1609/aaai.v37i8.26159

Keywords:

ML: Unsupervised & Self-Supervised Learning, ML: Semi-Supervised Learning, ML: Multi-Class/Multi-Label Learning & Extreme Classification

Abstract

Owing to the prohibitive costs of generating large amounts of labeled data, programmatic weak supervision is a growing paradigm within machine learning. In this setting, users design heuristics that provide noisy labels for subsets of the data. These weak labels are combined (typically via a graphical model) to form pseudolabels, which are then used to train a downstream model. In this work, we question a foundational premise of the typical weakly supervised learning pipeline: given that the heuristic provides all “label” information, why do we need to generate pseudolabels at all? Instead, we propose to directly transform the heuristics themselves into corresponding loss functions that penalize differences between our model and the heuristic. By constructing losses directly from the heuristics, we can incorporate more information than is used in the standard weakly supervised pipeline, such as how the heuristics make their decisions, which explicitly informs feature selection during training. We call our method Losses over Labels (LoL) as it creates losses directly from heuristics without going through the intermediate step of a label. We show that LoL improves upon existing weak supervision methods on several benchmark text and image classification tasks and further demonstrate that incorporating gradient information leads to better performance on almost every task.
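
The following is a minimal conceptual sketch, not the authors' released implementation, of what a loss constructed directly from a heuristic could look like. It assumes a differentiable surrogate h for the heuristic, a binary classification model that outputs logits, and a hypothetical weighting parameter lam for the gradient-alignment term; the function name heuristic_loss is also illustrative.

```python
import torch
import torch.nn.functional as F

def heuristic_loss(model, x, h, lam=0.1):
    """Sketch of a loss built directly from a heuristic h (not from pseudolabels).

    Penalizes disagreement between the model's predictions and the heuristic's
    soft outputs, and adds a term that aligns the model's input gradients with
    the heuristic's, so the model attends to the features the heuristic uses.
    Assumes h maps inputs to soft scores in [0, 1] and is differentiable.
    """
    x = x.clone().requires_grad_(True)

    model_logits = model(x).squeeze(-1)   # model's predicted logit per example
    heur_scores = h(x).squeeze(-1)        # differentiable surrogate of the heuristic

    # Prediction-matching term: push the model toward the heuristic's outputs
    # on the points the heuristic covers.
    pred_term = F.binary_cross_entropy_with_logits(model_logits, heur_scores.detach())

    # Gradient-matching term: encourage the model's input sensitivities to
    # resemble the heuristic's, i.e., use "how the heuristic decides".
    g_model = torch.autograd.grad(model_logits.sum(), x, create_graph=True)[0]
    g_heur = torch.autograd.grad(heur_scores.sum(), x, retain_graph=True)[0]
    grad_term = F.mse_loss(g_model, g_heur.detach())

    return pred_term + lam * grad_term
```

In a full pipeline, one such loss term would presumably be formed per heuristic and the terms summed (or weighted) over all heuristics when training the downstream model, in place of training on combined pseudolabels.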

Published

2023-06-26

How to Cite

Sam, D., & Kolter, J. Z. (2023). Losses over Labels: Weakly Supervised Learning via Direct Loss Construction. Proceedings of the AAAI Conference on Artificial Intelligence, 37(8), 9695-9703. https://doi.org/10.1609/aaai.v37i8.26159

Issue

Vol. 37 No. 8 (2023)

Section

AAAI Technical Track on Machine Learning III