Adversarial Robustness in Multi-Task Learning: Promises and Illusions

Authors

  • Salah Ghamizi, University of Luxembourg
  • Maxime Cordy, University of Luxembourg
  • Mike Papadakis, University of Luxembourg
  • Yves Le Traon, University of Luxembourg

DOI:

https://doi.org/10.1609/aaai.v36i1.19950

Keywords:

Computer Vision (CV), Machine Learning (ML)

Abstract

Vulnerability to adversarial attacks is a well-known weakness of deep neural networks. While most studies focus on single-task neural networks with computer vision datasets, very little research has considered the complex multi-task models that are common in real applications. In this paper, we evaluate the design choices that impact the robustness of multi-task deep learning networks. We provide evidence that blindly adding auxiliary tasks, or weighting the tasks, provides a false sense of robustness. We thereby tone down the claims made by previous research and study the different factors that may affect robustness. In particular, we show that the choice of the tasks to incorporate in the loss function is an important factor that can be leveraged to yield more robust models. We provide the appendix, all our algorithms, models, and open-source code at https://github.com/yamizi/taskaugment.
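
A minimal sketch of the setting described above, assuming a PyTorch-style setup: a shared encoder with a main classification head and an auxiliary head, a weighted sum of per-task losses, and an FGSM-style attack computed against that joint loss. This is illustrative only, not the authors' implementation (their code is in the repository linked above); the architecture, the auxiliary task, the loss weights, and the perturbation budget are all assumptions.

    # Illustrative sketch only, not the paper's code. A shared encoder
    # feeds two task heads; training minimizes a weighted sum of the
    # per-task losses, which is the design choice the paper examines.
    import torch
    import torch.nn as nn

    class MultiTaskNet(nn.Module):
        def __init__(self, num_classes=10, aux_dim=5):  # sizes are assumptions
            super().__init__()
            self.encoder = nn.Sequential(               # shared feature extractor
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
            self.cls_head = nn.Linear(16, num_classes)  # main task head
            self.aux_head = nn.Linear(16, aux_dim)      # auxiliary task head

        def forward(self, x):
            z = self.encoder(x)
            return self.cls_head(z), self.aux_head(z)

    def multi_task_loss(model, x, y_cls, y_aux, weights=(1.0, 0.5)):
        # Weighted sum of task losses; the weights here are illustrative.
        out_cls, out_aux = model(x)
        loss_cls = nn.functional.cross_entropy(out_cls, y_cls)
        loss_aux = nn.functional.mse_loss(out_aux, y_aux)
        return weights[0] * loss_cls + weights[1] * loss_aux

    def fgsm_on_joint_loss(model, x, y_cls, y_aux, eps=8 / 255):
        # One-step FGSM against the joint loss, i.e. an attacker that is
        # aware of every task the model was trained on.
        x_adv = x.clone().detach().requires_grad_(True)
        multi_task_loss(model, x_adv, y_cls, y_aux).backward()
        return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

    # Example with random data (shapes are assumptions):
    model = MultiTaskNet()
    x = torch.rand(4, 3, 32, 32)
    y_cls, y_aux = torch.randint(0, 10, (4,)), torch.randn(4, 5)
    x_adv = fgsm_on_joint_loss(model, x, y_cls, y_aux)

Under these assumptions, changing the head set or the loss weights changes the loss surface the attacker exploits, which is the lever the paper studies.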

Published

2022-06-28

How to Cite

Ghamizi, S., Cordy, M., Papadakis, M., & Le Traon, Y. (2022). Adversarial Robustness in Multi-Task Learning: Promises and Illusions. Proceedings of the AAAI Conference on Artificial Intelligence, 36(1), 697-705. https://doi.org/10.1609/aaai.v36i1.19950

Section

AAAI Technical Track on Computer Vision I