Learning Multi-Level Task Groups in Multi-Task Learning

Authors

  • Lei Han, Hong Kong Baptist University
  • Yu Zhang, Hong Kong Baptist University

DOI:

https://doi.org/10.1609/aaai.v29i1.9581

Keywords:

Multi-Task Learning, Task Grouping, Multi-Level

Abstract

In multi-task learning (MTL), multiple related tasks are learned jointly by sharing information across them. Many MTL algorithms have been proposed to learn the underlying task groups. However, those methods are limited to learning task groups at only a single level, which may not be sufficient to model the complex structure among tasks in many real-world applications. In this paper, we propose a Multi-Level Task Grouping (MeTaG) method to learn a multi-level grouping structure among tasks instead of only a single level. Specifically, assuming the number of levels to be H, we decompose the parameter matrix into a sum of H component matrices, each of which is regularized with an ℓ2 norm on the pairwise differences among the parameters of all the tasks to construct level-specific task groups. For optimization, we employ the smoothing proximal gradient method to efficiently solve the objective function of the MeTaG model. Moreover, we provide theoretical analysis showing that, under certain conditions, the MeTaG model can recover the true parameter matrix and the true task groups in each level with high probability. We evaluate our approach on both synthetic and real-world datasets, showing competitive performance over state-of-the-art MTL methods.
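To make the decomposition concrete, the sketch below evaluates a MeTaG-style objective under the assumptions stated in the abstract: the parameter matrix is the sum of H component matrices, and each component carries an ℓ2 penalty on pairwise column (task) differences so that tasks whose columns coincide form a group at that level. The squared loss, the per-level weights `lams`, and all function names here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def metag_objective(X_list, y_list, components, lams):
    """Illustrative MeTaG-style objective (a sketch, not the paper's code).

    X_list, y_list: per-task data matrices and targets.
    components: list of H arrays, each d x T (features x tasks);
        the effective parameter matrix is their sum, W = sum_h W^(h).
    lams: assumed per-level regularization weights.
    """
    W = sum(components)  # combine the H level-specific components
    T = W.shape[1]
    # Squared loss summed over all T tasks (assumed loss form)
    loss = sum(
        0.5 * np.sum((X_list[t] @ W[:, t] - y_list[t]) ** 2)
        for t in range(T)
    )
    # Level-specific pairwise-difference penalty: tasks whose columns
    # of W^(h) are driven to be equal fall into the same group at level h.
    penalty = 0.0
    for lam, Wh in zip(lams, components):
        for i in range(T):
            for j in range(i + 1, T):
                penalty += lam * np.linalg.norm(Wh[:, i] - Wh[:, j])
    return loss + penalty
```

For instance, with two tasks whose columns are identical within every component matrix, the penalty term vanishes, reflecting that all tasks fall into one group at every level.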

Published

2015-02-21

How to Cite

Han, L., & Zhang, Y. (2015). Learning Multi-Level Task Groups in Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 29(1). https://doi.org/10.1609/aaai.v29i1.9581

Issue

Section

Main Track: Novel Machine Learning Algorithms