Encoding Tree Sparsity in Multi-Task Learning: A Probabilistic Framework

Authors

  • Lei Han, Peking University
  • Yu Zhang, Hong Kong Baptist University
  • Guojie Song, Peking University
  • Kunqing Xie, Peking University

DOI:

https://doi.org/10.1609/aaai.v28i1.9009

Keywords:

Multi-Task Learning, Sparsity, Probabilistic Modeling

Abstract

Multi-task learning (MTL) seeks to improve generalization performance by sharing common information among multiple related tasks. A key assumption in most MTL algorithms is that all tasks are related, which may not hold in many real-world applications. Existing techniques that address this issue aim to identify groups of related tasks using group sparsity. In this paper, we propose a probabilistic tree sparsity (PTS) model that exploits a tree structure, rather than a group structure, to obtain sparse solutions. Specifically, each model coefficient in the learning model is decomposed into a product of component coefficients, each of which corresponds to a node in the tree. Based on this decomposition, Gaussian and Cauchy distributions are placed on the component coefficients as priors to restrict the model complexity. We devise an efficient expectation-maximization (EM) algorithm to learn the model parameters. Experiments conducted on both synthetic and real-world problems show the effectiveness of our model compared with state-of-the-art baselines.
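The decomposition described in the abstract can be illustrated with a short sketch. The following is a minimal, hypothetical example, not the authors' implementation: the toy tree over four tasks, the feature dimension d, and the placement of Gaussian draws on internal nodes versus Cauchy draws on task-specific leaf components are all assumptions made for illustration, and the paper's EM procedure for learning the components is omitted.

```python
# Minimal sketch (not the authors' code) of the coefficient decomposition
# described in the abstract: each task's coefficient vector is the product
# of component coefficients attached to the nodes on its root-to-leaf path.
# The tree layout, the feature dimension d, and which nodes get Gaussian
# versus Cauchy draws are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
d = 5  # feature dimension (assumed)

# A toy tree over 4 tasks: root -> {A, B} -> one leaf per task.
paths = {
    "task1": ["root", "A"],
    "task2": ["root", "A"],
    "task3": ["root", "B"],
    "task4": ["root", "B"],
}

# Component coefficients: Gaussian draws for internal nodes, Cauchy draws
# for the task-specific leaf components (one possible prior placement).
node_coef = {n: rng.normal(0.0, 1.0, size=d) for n in ["root", "A", "B"]}
leaf_coef = {t: rng.standard_cauchy(size=d) for t in paths}

# Each task's coefficient vector is the elementwise product along its path.
# A component near zero at an internal node shrinks that feature for every
# task in its subtree, which is the tree-structured sparsity being encoded.
W = {t: np.prod([node_coef[n] for n in path], axis=0) * leaf_coef[t]
     for t, path in paths.items()}

for t, w in W.items():
    print(t, np.round(w, 2))
```

Because the factors multiply along the path, shrinking a single internal-node component suppresses the corresponding features for every task below it, while the heavy-tailed Cauchy components let individual tasks deviate; this is one plausible reading of how the Gaussian and Cauchy priors divide their roles.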

Published

2014-06-21

How to Cite

Han, L., Zhang, Y., Song, G., & Xie, K. (2014). Encoding Tree Sparsity in Multi-Task Learning: A Probabilistic Framework. Proceedings of the AAAI Conference on Artificial Intelligence, 28(1). https://doi.org/10.1609/aaai.v28i1.9009

Issue

Vol. 28 No. 1 (2014)

Section

Main Track: Novel Machine Learning Algorithms