Multi-Task Learning and Algorithmic Stability

Authors

  • Yu Zhang, Hong Kong Baptist University

DOI:

https://doi.org/10.1609/aaai.v29i1.9558

Keywords:

Multi-Task Learning, Stability

Abstract

In this paper, we study multi-task algorithms from the perspective of algorithmic stability. We define multi-task uniform stability, a generalization of conventional uniform stability, which measures the maximum change in the loss of a multi-task algorithm when it is trained on the same data set but with one data point removed from each task. In order to analyze multi-task algorithms in terms of multi-task uniform stability, we prove a generalized McDiarmid's inequality in which the bounded-difference condition is required to hold when multiple input arguments are changed simultaneously, rather than only one as in the conventional McDiarmid's inequality. Using the generalized McDiarmid's inequality as a tool, we analyze the generalization performance of general multi-task algorithms in terms of multi-task uniform stability. Moreover, as applications, we prove generalization bounds for several representative regularized multi-task algorithms.
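The abstract does not give the formal definitions, but the two central notions can be sketched in LaTeX. The following is a plausible reconstruction modeled on Bousquet and Elisseeff's single-task uniform stability; the symbols ($m$ tasks, $n$ points per task, $\tau$, the constants $c$) are illustrative assumptions, not notation taken from the paper itself.

```latex
% Single-task uniform stability: removing point i from the training set S
% changes the loss on any test point z by at most \beta:
%   \sup_z \, |\ell(A_S, z) - \ell(A_{S^{\setminus i}}, z)| \le \beta.
%
% A plausible multi-task analogue with training sets S = (S_1, ..., S_m):
% S' is obtained from S by removing ONE point from EACH task's set, and the
% algorithm A has multi-task uniform stability \tau if
\[
  \sup_{z_1,\dots,z_m}\;
  \frac{1}{m}\sum_{t=1}^{m}
  \Bigl| \ell\bigl(A_S^{(t)}, z_t\bigr)
       - \ell\bigl(A_{S'}^{(t)}, z_t\bigr) \Bigr|
  \;\le\; \tau .
\]
% Conventional McDiarmid: if changing any single argument x_k changes
% f(x_1,\dots,x_n) by at most c_k, then
\[
  \Pr\bigl[\, f - \mathbb{E}[f] \ge \epsilon \,\bigr]
  \;\le\;
  \exp\!\Bigl( -\tfrac{2\epsilon^2}{\sum_{k} c_k^2} \Bigr).
\]
% The generalized version proved in the paper instead assumes the
% bounded-difference condition when a GROUP of arguments (one point per
% task) is changed simultaneously, which is exactly the perturbation that
% multi-task uniform stability controls.
```

Under this reading, the generalized inequality is what lets a single stability constant for the joint, per-task perturbation translate into a concentration bound for the multi-task empirical risk.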

Published

2015-02-21

How to Cite

Zhang, Y. (2015). Multi-Task Learning and Algorithmic Stability. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9558

Section

Main Track: Novel Machine Learning Algorithms