Multi-task Learning by Leveraging the Semantic Information

Authors

  • Fan Zhou, Laval University
  • Brahim Chaib-draa, Laval University
  • Boyu Wang, University of Western Ontario; Vector Institute

Keywords:

Transfer/Adaptation/Multi-task/Meta/Automated Learning

Abstract

One crucial objective of multi-task learning is to align distributions across tasks so that information can be transferred and shared between them. However, existing approaches focus only on matching the marginal feature distributions while ignoring the semantic information, which may hinder learning performance. To address this issue, we propose to leverage the label information in multi-task learning by exploring the semantic conditional relations among tasks. We first theoretically analyze the generalization bound of multi-task learning based on the notion of Jensen-Shannon divergence, which provides new insights into the value of label information in multi-task learning. Our analysis also leads to a concrete algorithm that jointly matches the semantic distribution and controls the label distribution divergence. To confirm the effectiveness of the proposed method, we first compare the algorithm with several baselines on standard benchmarks and then test it under label-space shift. Empirical results demonstrate that the proposed method outperforms most baselines and achieves state-of-the-art performance, with particularly clear benefits under label shift.
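The abstract's generalization bound is stated in terms of the Jensen-Shannon divergence between task distributions. As a quick illustration only (not the paper's implementation), the JS divergence between two discrete label distributions can be computed as the symmetrized KL divergence against their mixture:

```python
import numpy as np

def kl_divergence(p, q):
    """KL(p || q) for discrete distributions, skipping zero-probability terms."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def js_divergence(p, q):
    """Jensen-Shannon divergence: average KL of p and q to their mixture m."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    m = 0.5 * (p + q)
    return 0.5 * kl_divergence(p, m) + 0.5 * kl_divergence(q, m)

# Identical label distributions have zero divergence ...
print(js_divergence([0.5, 0.5], [0.5, 0.5]))  # 0.0
# ... while a label-space shift between tasks yields a positive, bounded value
# (at most log 2, attained for fully disjoint supports).
print(js_divergence([0.9, 0.1], [0.1, 0.9]))
```

Controlling a quantity of this form between tasks' label distributions is, per the abstract, one component of the proposed algorithm; the sketch above only shows the divergence itself, not the matching procedure.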

Published

2021-05-18

How to Cite

Zhou, F., Chaib-draa, B., & Wang, B. (2021). Multi-task Learning by Leveraging the Semantic Information. Proceedings of the AAAI Conference on Artificial Intelligence, 35(12), 11088-11096. Retrieved from https://ojs.aaai.org/index.php/AAAI/article/view/17323

Section

AAAI Technical Track on Machine Learning V