Adversarial Learning from Crowds

Authors

  • Pengpeng Chen, SKLSDE Lab, School of Computer Science and Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China
  • Hailong Sun, SKLSDE Lab, School of Software, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China
  • Yongqiang Yang, SKLSDE Lab, School of Computer Science and Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China
  • Zhijun Chen, SKLSDE Lab, School of Computer Science and Engineering, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China

DOI:

https://doi.org/10.1609/aaai.v36i5.20467

Keywords:

Humans And AI (HAI), Machine Learning (ML), Data Mining & Knowledge Management (DMKM), Search And Optimization (SO)

Abstract

Learning from Crowds (LFC) seeks to induce a high-quality classifier from training instances whose labels are noisy annotations collected from crowdsourcing workers with varying levels of skill and their own preconceptions. Recent studies on LFC focus on designing new methods to improve the performance of classifiers trained on crowdsourced labeled data. To date, however, the security aspects of LFC systems remain under-explored. In this work, we seek to bridge this gap. We first show that LFC models are vulnerable to adversarial examples: small changes to input data can cause classifiers to make prediction mistakes. Second, we propose an approach, A-LFC, for training a robust classifier from crowdsourced labeled data. Our empirical results on three real-world datasets show that the proposed approach can substantially improve the performance of the trained classifier even in the presence of adversarial examples. On average, A-LFC achieves 10.05% and 11.34% higher test robustness than the state-of-the-art in the white-box and black-box attack settings, respectively.
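The abstract rests on two technical ideas: classifiers trained from crowdsourced labels can be fooled by small input perturbations, and adversarial training can harden such classifiers. The following is a minimal sketch of those ideas only, not the authors' A-LFC method or its evaluation protocol; it assumes majority-vote label aggregation, a synthetic two-class dataset, simulated workers, and a standard FGSM attack plus adversarial-training loop in PyTorch, all chosen purely for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def majority_vote(worker_labels):
    """Aggregate noisy crowd labels (n_items x n_workers, -1 = missing) by majority vote."""
    aggregated = []
    for row in worker_labels:
        valid = row[row >= 0]
        aggregated.append(valid.bincount().argmax() if len(valid) > 0 else torch.tensor(0))
    return torch.stack(aggregated)

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: a one-step perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

# Toy setup: a 2-class problem with synthetic features and three simulated workers,
# each of whom flips the true label with a different noise rate.
torch.manual_seed(0)
n, d = 512, 20
x = torch.randn(n, d)
true_y = (x[:, 0] + x[:, 1] > 0).long()
noise_rates = [0.1, 0.2, 0.3]
worker_labels = torch.stack(
    [torch.where(torch.rand(n) < p, 1 - true_y, true_y) for p in noise_rates], dim=1)
y = majority_vote(worker_labels)  # crowd-aggregated (still noisy) training labels

model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Adversarial training: perturb the inputs with FGSM at each step, then minimize the
# loss on the perturbed inputs against the aggregated crowd labels.
for epoch in range(100):
    x_adv = fgsm(model, x, y, eps=0.1)
    opt.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    opt.step()

# Robust accuracy: measured on adversarial examples crafted against the final model.
x_test_adv = fgsm(model, x, true_y, eps=0.1)
with torch.no_grad():
    robust_acc = (model(x_test_adv).argmax(dim=1) == true_y).float().mean().item()
print(f"robust accuracy under FGSM: {robust_acc:.3f}")
```

The paper's approach learns directly from the raw crowdsourced annotations; this sketch substitutes simple majority voting for whatever annotation modeling A-LFC performs and only mirrors the overall attack-then-train structure.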

Published

2022-06-28

How to Cite

Chen, P., Sun, H., Yang, Y., & Chen, Z. (2022). Adversarial Learning from Crowds. Proceedings of the AAAI Conference on Artificial Intelligence, 36(5), 5304-5312. https://doi.org/10.1609/aaai.v36i5.20467

Issue

Vol. 36 No. 5 (2022)

Section

AAAI Technical Track on Humans and AI