Self-Paced Multi-Task Learning

Authors

  • Changsheng Li, East China Normal University
  • Junchi Yan, East China Normal University
  • Fan Wei, Stanford University
  • Weishan Dong, IBM Research - China
  • Qingshan Liu, Nanjing University of Information Science and Technology
  • Hongyuan Zha, East China Normal University

DOI:

https://doi.org/10.1609/aaai.v31i1.10847

Keywords:

multi-task learning

Abstract

Multi-task learning is a paradigm in which multiple tasks are learned jointly. Previous multi-task learning models usually treat all tasks, and all instances within each task, equally during learning. Inspired by the fact that humans often learn from easy concepts to hard ones in the cognitive process, in this paper we propose a novel multi-task learning framework that learns the tasks by simultaneously taking into consideration the complexities of both tasks and instances within each task. We propose a novel formulation with a new task-oriented regularizer that can jointly prioritize tasks and instances; the model can thus be interpreted as a self-paced learner for multi-task learning. An efficient block coordinate descent algorithm is developed to solve the proposed objective function, and the convergence of the algorithm is guaranteed. Experimental results on toy and real-world datasets demonstrate the effectiveness of the proposed approach compared to state-of-the-art methods.
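To make the easy-to-hard intuition concrete, the following is a minimal NumPy sketch of self-paced weighting applied jointly at the instance and task level, as the abstract describes. It alternates between fitting per-task models on currently selected "easy" data and re-selecting data whose loss falls below a growing pace threshold. The function name, the least-squares losses, the hard 0/1 weighting scheme, and the threshold-annealing schedule are illustrative assumptions for this sketch, not the paper's exact regularizer or its block coordinate descent solver.

```python
import numpy as np

def self_paced_mtl(X_list, y_list, lam_inst=1.0, lam_task=1.0,
                   growth=1.2, n_outer=10, lr=0.01, n_inner=100):
    """Sketch of self-paced multi-task least-squares regression.

    Alternates between (a) re-selecting instances and tasks whose
    losses fall below the current pace thresholds and (b) fitting
    per-task linear models on the selected (easy) instances.
    The thresholds grow each round so harder examples enter later.
    """
    T = len(X_list)
    W = [np.zeros(X.shape[1]) for X in X_list]
    for _ in range(n_outer):
        # (a) self-paced step: weight instances, then tasks, by loss
        v = []                # per-task instance weights (hard 0/1)
        u = np.zeros(T)       # task weights (hard 0/1)
        for t in range(T):
            losses = (X_list[t] @ W[t] - y_list[t]) ** 2
            v_t = (losses < lam_inst).astype(float)
            v.append(v_t)
            task_loss = (v_t * losses).sum() / max(v_t.sum(), 1.0)
            u[t] = 1.0 if task_loss < lam_task else 0.0
        # (b) weighted model update via gradient descent per task
        for t in range(T):
            if u[t] == 0.0:
                continue      # task deemed too hard at this pace
            for _ in range(n_inner):
                resid = X_list[t] @ W[t] - y_list[t]
                grad = X_list[t].T @ (v[t] * resid) / len(y_list[t])
                W[t] -= lr * grad
        lam_inst *= growth    # anneal pace: admit harder instances
        lam_task *= growth    # and harder tasks over time
    return W
```

Growing the thresholds each round mirrors the easy-to-hard curriculum: early rounds fit only low-loss instances and tasks, and later rounds gradually admit harder ones into the joint learner.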

Published

2017-02-13

How to Cite

Li, C., Yan, J., Wei, F., Dong, W., Liu, Q., & Zha, H. (2017). Self-Paced Multi-Task Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 31(1). https://doi.org/10.1609/aaai.v31i1.10847