Data Poisoning Attacks on Multi-Task Relationship Learning


  • Mengchen Zhao Nanyang Technological University
  • Bo An Nanyang Technological University
  • Yaodong Yu Nanyang Technological University
  • Sulin Liu Nanyang Technological University
  • Sinno Pan Nanyang Technological University


data poisoning, multi-task learning


Multi-task learning (MTL) is a machine learning paradigm that improves the performance of each task by exploiting useful information contained in multiple related tasks. However, the relatedness of tasks can be exploited by attackers to launch data poisoning attacks, which have been shown to pose a serious threat to single-task learning. In this paper, we provide the first study on the vulnerability of MTL. Specifically, we focus on multi-task relationship learning (MTRL) models, a popular subclass of MTL models in which task relationships are quantified and learned directly from training data. We formulate the problem of computing optimal poisoning attacks on MTRL as a bilevel program that is adaptive to arbitrary choices of target tasks and attacking tasks. We propose an efficient algorithm called PATOM for computing optimal attack strategies. PATOM leverages the optimality conditions of the subproblem of MTRL to compute the implicit gradients of the upper-level objective function. Experimental results on real-world datasets show that MTRL models are very sensitive to poisoning attacks and that the attacker can significantly degrade the performance of target tasks, either by directly poisoning the target tasks or by indirectly poisoning related tasks through the task relatedness. We also find that the tasks being attacked are always strongly correlated, which provides a clue for defending against such attacks.
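The bilevel structure described in the abstract can be illustrated with a deliberately simplified sketch. This is *not* the paper's PATOM algorithm: it uses a single ridge-regression task rather than MTRL, solves the inner (victim) problem in closed form, and approximates the outer gradient by finite differences instead of the implicit gradients PATOM derives from the subproblem's optimality conditions. All names and parameter values below are illustrative assumptions.

```python
import numpy as np

# Toy bilevel poisoning sketch (illustrative, not PATOM):
# outer level  - attacker ascends the victim's validation loss
#                by moving one injected poison point;
# inner level  - victim retrains ridge regression on the
#                poisoned data (closed-form solution).

rng = np.random.default_rng(0)
lam = 0.1  # ridge regularizer (assumed)

def train_ridge(X, y):
    # Inner problem: w* = argmin ||Xw - y||^2 + lam ||w||^2
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def val_loss(w, Xv, yv):
    r = Xv @ w - yv
    return float(r @ r) / len(yv)

# Clean task data (synthetic)
X = rng.normal(size=(40, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=40)
Xv = rng.normal(size=(20, 3))
yv = Xv @ w_true

def poisoned_loss(xp, yp):
    # Validation loss of the model retrained with one poison point appended
    Xp = np.vstack([X, xp])
    yp_full = np.append(y, yp)
    return val_loss(train_ridge(Xp, yp_full), Xv, yv)

# Outer loop: gradient ascent on the poison point's features.
# Finite differences stand in for the implicit gradient.
xp, yp = rng.normal(size=3), 5.0  # mislabeled poison point (assumed label)
eps, step = 1e-4, 0.2
base = poisoned_loss(xp, yp)
best = base
for _ in range(100):
    g = np.zeros(3)
    for j in range(3):
        e = np.zeros(3)
        e[j] = eps
        g[j] = (poisoned_loss(xp + e, yp) - poisoned_loss(xp - e, yp)) / (2 * eps)
    xp = np.clip(xp + step * g, -3, 3)  # keep the poison point in a feasible box
    best = max(best, poisoned_loss(xp, yp))

print(base, best)  # optimizing the poison point should not lower attainable loss
```

In PATOM the same outer/inner pattern applies, but the inner problem is the full MTRL objective (over all tasks and the task-covariance matrix), and the outer gradient is computed exactly via the inner problem's optimality conditions rather than by finite differences.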




How to Cite

Zhao, M., An, B., Yu, Y., Liu, S., & Pan, S. (2018). Data Poisoning Attacks on Multi-Task Relationship Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1).



Main Track: Machine Learning Applications